Saturday, February 28, 2015

Fujitsu's radically different data processing engine brings massive speed boost

Fujitsu has unveiled a new column-oriented data-processing engine that brings up to a 50-fold increase in data processing speeds across database systems.

The new engine, which runs on the open-source PostgreSQL database, aims to eliminate the problems that come from trying to reflect updates or changes to row-oriented data in column-oriented data.

Without being dependent on memory capacity, the engine updates column-oriented data in response to changes in row-oriented data while still processing column-oriented data at high speed. It does this by quickly analysing the indexes provided by most database systems, and it also allows developers to easily determine whether a storage method is column-oriented or row-oriented.

The uplift in speed comes from a parallel-processing engine specifically suited to processing column-oriented data that allows analyses to run on a single CPU core to be conducted at four times the speed. For a server equipped with 15 CPU cores this translates to analyses that can be run at least 50 times faster.
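The speed claims rest on the basic trade-off between row- and column-oriented layouts. A minimal Python sketch (illustrative data only, nothing specific to Fujitsu's engine) shows the same records in both layouts, and why an analytical scan over one field is a simple contiguous pass in the columnar form:

```python
# Toy illustration of row-oriented vs column-oriented storage.
# The data is made up; real engines add compression, vectorization, etc.

rows = [
    {"id": 1, "region": "EU", "sales": 120},
    {"id": 2, "region": "US", "sales": 300},
    {"id": 3, "region": "EU", "sales": 90},
]

# Row-oriented: scanning one field still touches every whole record.
total_row_oriented = sum(r["sales"] for r in rows)

# Column-oriented: the same data pivoted so each field is one array,
# so an aggregate reads only the bytes it actually needs.
columns = {
    "id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "sales": [120, 300, 90],
}
total_column_oriented = sum(columns["sales"])

assert total_row_oriented == total_column_oriented == 510
```

The same contiguity is also what makes columnar scans easy to split across CPU cores, which is where the article's parallel-processing speedup comes from.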

Fujitsu is currently aiming for commercial implementation of the new technology during fiscal 2015 as part of its Symfoware Server database product.

Via: Fujitsu



In Depth: iPhone through the ages: just how much has it changed?

It was January 2007 when Steve Jobs took to the stage of the Moscone Center San Francisco to announce the arrival of the iPhone, which went on sale worldwide later that year.

If you find it difficult to remember that far back, Leona Lewis was number one in the UK with A Moment Like This and people were flocking to the cinema to get teary-eyed at Will Smith in The Pursuit Of Happyness.

While our pop music and movie choices may not have improved much, smartphones were changed forever: from that point on, touchscreens, apps and digital media were the way forward.

Launched: June 2007 (US), November 2007 (UK)

Part iPod, part phone, part Internet device: the original 2007 iPhone.

Steve Jobs introduced the iPhone as three devices in one: a touchscreen iPod, a revolutionary mobile phone, and a truly mobile web browser.

Now we take touchscreens, digital media playback and Web access for granted, but in 2007 the iPhone was unlike anything that had appeared before. Its 3.5-inch screen had a 320 x 480 pixel resolution (one of the best displays of the time), with a built-in 2MP camera and up to 8GB of storage.

Third-party apps were not yet allowed on "iPhone OS". In the TechRadar review, we noted that despite several shortcomings, the phone had "changed the mobile device landscape... multitouch will prove to be a model for interfaces in the future."

Launched: July 2008

The second iPhone model brought with it 3G connectivity, but was very similar to the original.

High-speed connectivity was big news in 2008, which is why the second-generation iPhone included 3G in its moniker (rather confusingly, given that it was the second iPhone, not the third). It also brought with it a thinner shape, a plastic back and - crucially - support for the newly launched App Store.

The app store model worked so well that you'll now find it replicated in everything from your smart TV to your Windows 8 laptop, and the change helped Apple's phone really start to gain traction.

Our iPhone 3G review promised that buyers would be "amazed by the function and feel of this handset." The iPhone era had begun in earnest.

Launched: June 2009

Video recording came to the iPhone with the launch of the 3GS model.

The iPhone 3GS upgrade was viewed as disappointingly minor at the time, but look at the detail and a different picture emerges: as well as faster performance, the new handset offered a better 3.2MP camera (that could now record video as well as take photos), extra storage options and voice control (the precursor to Siri).

The display was the same 3.5-inch 320 x 480 screen, and the device's appearance remained largely unchanged from the 3G model. TechRadar's take on the unit praised the multimedia and internet capabilities while still finding niggles with the camera, call quality and battery life – this was the first of the more iterative updates to the iPhone but did enough to keep users happy.

Launched: June 2010

The iPhone 4 transformed the look and display of Apple's flagship device.

If the 3GS was a minor upgrade, the iPhone 4 was a serious step up: a new, flat design with an integrated antenna (although questions were raised about how you held the device), a high-resolution Retina display (640 x 960 pixels) that showed the rest of the world how it was done, and a superior 5MP camera (featuring HD video recording), on top of internal performance improvements.

The competition was catching up, and Apple had responded in brilliant fashion. We were certainly impressed, despite some reservations about the high price, saying "it's intriguing to see record-breaking numbers queuing up to pick up this device - but after playing with it for a few days, you can see why."



Opinion: What wearables need to do to make me click 'buy'

Wearable technology exists to satisfy strange curiosities. We're not talking "anime body pillow" levels, but more to the extent that it's a little odd to want a camera mounted to your eyeglasses. Wearables aim to fill technological voids where voids don't exist, by putting computers into watches and vibration motors into bangles and earrings. There's no doubt retrofitting these "dumb" gadgets with cutting-edge improvements adds new dimensions of functionality and fashion, but is any of it necessary?

Necessity may be the mother of invention, but wearables somehow exist without it. By veering from this traditional development path, many weird proofs-of-concept have rushed to market. But once smart companies begin to get behind the idea, will wearables have their chance to find solid ground?

Can't see it? The Lumo Lift is that little square.

Even if they do stick the landing, I'm still not sure that I can be convinced to buy in. That's not to say I haven't been tempted by a few wearables. My colleagues have caught me in a cold sweat before, hovering over the "buy" button for a Pebble smartwatch several times. I've somehow yet to buckle.

As someone who absolutely loves tech, I want to love wearables. But there are still a few pesky things that make resisting them all too easy.

Wearables are misguided at the moment. After all, the most alluring hook going for them isn't utility - which ought to be their biggest selling point - but stunning design. Taking a few hints from skilled crafters of watches, fine jewelry and eyeglasses - and, in the case of the Misfit Swarovski Shine, partnering with them - tech companies are pumping out some seriously attractive products.

Just a dapper dude and his LG Watch Urbane.

Just rattling off a few examples, the LG Watch Urbane, Apple Watch and Google Glass are each delectably crafted and fashionable. And their efforts to break into the fashion-savvy world haven't gone unnoticed. I too enjoy stellar design as much as the next person, but it doesn't distract me from seeing these wearables for what they are: technologies that have come to market bass ackwards. Maybe I'm just strange, but I require function before form and too few wearables have just that.

Wearables aim to ease modern struggles by minimizing the effort and time gap that exists between you and your information. As superficial as that seems on paper, it's easy to get sucked into the idea that wearables will make your life easier. To an extent, some of them do. A product that simplifies a pesky sequence of movements is convenient, but that doesn't necessarily make it innovative.

Smartwatches like the Moto 360 and the upcoming Apple Watch each claim to bring new functionality to the table that smartphones aren't capable of providing. Aside from biometric readings, smartwatches primarily act as a vessel to push smartphone notifications straight to your wrist, eliminating the need to fish your phone out of your pocket.

Cameras attached to anything is a solid value proposition.

Other devices like the Sony SmartEyeglass Attach look to streamline that process even further by abandoning the wrist and moving straight to your face, pushing the info straight to your eyes by integrating a camera and screen into an ordinary-looking set of glasses. The camera nixes the need to yank your phone out to snap a pic, and the screen discreetly displays notifications.

On a more fashionable note, Kovert Designs makes smart jewelry called Altruis. The concept of putting micro-electronics like a vibration motor inside of a gold necklace is novel, but ultimately each piece of jewelry costs more than $400 and does nothing beyond notify you of alerts on your phone.

Convenience alone is enough to sell some, but I can't be bothered to invest in expensive devices that only shave a couple of seconds off the process of checking my phone for notifications or taking photos.

No wearable yet displays a mastery of balancing function and fashion, but a few show promise. So few, in fact, that the first company to strike gold with that perfect brew may leave the competition very far behind.

Pebble Time keeps it simple.

However, until then, the scramble of experimentation will continue. And while I think that only good can come out of this painful but necessary iterative trial period in the evolution of wearables, it's just another reason I talk myself down from investing in wearables in the first place.

Am I suggesting that we should all stop buying wearables? Of course not. It's one of the sectors in technology that I'm most excited about for the future because it is uniquely positioned to help keep us healthy and entertained in fun, new ways. But in its current form, it needs work before I hit the "buy" button.



In Depth: Pebble Time release date, news and features

The original Pebble wasn't even really a "smartwatch" when it debuted but it revolutionized wearables and set the standard for future smartwatches to come.

The Pebble Steel further cemented the company's standing as a legitimate smartwatch competitor with a sleeker model, but it didn't fully impress in everyday use.

Now, with Pebble Time, it seems like the public cries were heard, and most issues with the previous watches have been ironed out.

It's still a couple months away but we're excited to get our hands on the newest wearable and we're betting you are, too. In the meantime, here's everything we know about the Pebble Time.

The Pebble Time will start shipping in May to Kickstarter backers, which puts it a month behind the Apple Watch release date. So far it's unclear when the Time will show up on retailer doorsteps, but expect it to arrive soon afterwards.

Pricing starts at $199 (£149, AU$199) which is the same as the Pebble Steel. However while the Kickstarter is still running you can grab the Time for $179 (about £115, AU$227).

It's far cheaper than the $250 (£200, AU$330) Moto 360 and the LG G Watch R, which costs $270 (£200, about AU$400). The Apple Watch will be the priciest wearable of the bunch, starting at $349 (likely north of £223, AU$403).

The full array of specs hasn't been released, but a recent Reddit AMA revealed a few gems.

The Pebble Time will have 64KB of RAM on board and an ARM Cortex-M4 CPU running at 100MHz, an upgrade from the original Pebble's Cortex-M3. The additional processing power is essential in supporting a new microphone feature.

Full specs will be released later on alongside the Pebble Time SDK.

Four sensors are built into the smartwatch: a 3-axis accelerometer, 3D compass, ambient light sensor and the aforementioned microphone.

Pebbles are known for outstanding battery life and the Time is no exception. Like the previously used e-ink displays, the new color e-paper screen helps reduce power consumption, allowing the battery to last for seven days before the next recharge.

The e-paper interface also utilizes a color palette that's limited to 64 colors, yet it looks like it will be far easier to read than the usual smartwatch displays. The company also kept the physical buttons rather than incorporating touchscreen capabilities.

Are YOU excited?

Bluetooth is of course included in the Time package with range supposedly better than the other Pebbles, reaching up to 50 meters or more.

It's been noted that the Time is similar in design to the Steel, but even better. At just 9.5mm, it's 20% thinner than the original Pebble.

Similar to the Samsung Gear S, the scratch-resistant Gorilla Glass watch body will be curved, though it will be much smaller and fit more comfortably on the wrist. The bezel is made of stainless steel, and the watch will come in three case colors - black, red and white - with a black bezel for the former two and silver for the latter.

The Time will also come with a silicone watch band in the same colors above to match the watch case, but any 22mm strap can be swapped in thanks to a quick release pin.



Latest Samsung Ativ Book 9 lands March 1 for a pretty penny

We were pretty impressed with the 2015 Samsung Ativ Book 9 when we went hands-on with it at CES back in January, and now the laptop is at last going on sale.

The latest Ativ Book 9 will hit shelves this Sunday, March 1, at two price points: $1,200 (about £780, AU$1,530) for 4GB of memory and 128GB of solid state storage, or $1,400 (about £900, AU$1,800) for 8GB and a 256GB SSD.

Samsung says the Windows 8.1 laptop is its thinnest yet in the Book 9 line, and its slim, stylish design is certainly one of its selling points.

But it also packs a powerful Intel Core M processor, 2560 x 1600 resolution and 12.5 hours of battery life, making it a versatile little 12.2-inch book.

Worth the money? That's for you to decide this weekend.

We've asked a Samsung spokesperson for the 2015 Ativ Book 9's international release info, and we'll update you here if we hear back.



BlackBerry 'Rio' flaunts its figure in leaked pics

We first learned of the BlackBerry "Rio" in 2014, but the latest leak indicates that it might not be the high-end "savior" we originally thought.

Word today from N4BB is that the phone code-named Rio will actually be called the BlackBerry Leap, and that it will have mid-range specs and - hopefully - an affordable price tag.

These alleged leaked images show a fairly stylish device, too.

According to the site, the BlackBerry Leap will rock a 720p 5-inch display, 8- and 2-megapixel cameras, a 2800mAh battery, dual-core 1.5GHz MSM 8960 chip, 2GB of memory, 16GB of storage, and microSD support.

The BlackBerry Leap.

The site doesn't have any info on the price, but says the phone will be released in April or May.

It would make sense for BB to debut the Leap at MWC 2015, but until then these leaked images will have to do.



Friday, February 27, 2015

Review: Lenovo Yoga Tablet 2 with AnyPen

Capitalizing on the stylus craze to give tablet owners more precision input, Lenovo asks users of its $299 (£195, AU$385) Yoga Tablet 2 with Windows to not only touch and poke at the screen, but to key it, stab it, and slash it with almost any metal object. Though Lenovo is merely iterating on its Yoga Tablet design, the real highlight - and really what distinguishes the tablet from others in the crowded space - is its AnyPen technology.

With AnyPen, Yoga Tablet 2 owners benefit from the finer accuracy of a digital stylus, but with the convenience of being able to use most everyday objects as a pen. Rather than carrying a specialized digital inking device that could get lost or stolen, AnyPen lets you create your own makeshift stylus.

Lenovo hopes that the convenience of AnyPen will help the Yoga Tablet 2 command a premium price. The Yoga Tablet 2 is priced higher than the $150 (£100, AU$195) 8-inch Dell Venue 8 Pro with an optional Active Stylus, but Dell's advantage is that you can add a folio and compact keyboard with physical keys to turn the slate into a netbook. Those who prefer Android and need pen-enabled support can opt for the $330 (£215, AU$425) Samsung Galaxy Note 8.

Without stylus support, pricing for Windows tablets with screens of eight inches or under drops to below $200 (£130, AU$260). Options in this spectrum include the $149 (£100, AU$190) Asus VivoTab 8, the $79 (£55, AU$100) 7-inch HP Stream 7, and the $179 (£115, AU$230) HP Stream 8 with a built-in 4G modem. If you're happy with iOS, Apple's $399 (£260, AU$510) iPad mini 3 is a great choice.

Measuring 8.27 x 5.87 x 0.28 inches or 210 x 149 x 7 mm (W X L X H), the Yoga Tablet 2 is an extension of Lenovo's Yoga vision in offering customers a single device that transforms into different form factors.


Like the first-generation Yoga Tablet, the Tablet 2 with Windows sheds the 360-degree hinged keyboard from Lenovo's Yoga Ultrabook series. Cloaked in black, you're presented with the familiar slim design, a barreled edge that is home to a flip-out kickstand (and the battery inside), and metal flourishes. Although the sides, barrel, and kickstand are constructed from metal, the backside is made of textured, matte plastic.

A crisp 8-inch, full HD, 1080p IPS display graces the front of the tablet. Because of the barreled edge, the tablet feels more balanced in landscape mode when used on a flat surface. In this position, the rear of the tablet is elevated while the front edge is lower, making it more comfortable to look down on the screen when you're sitting at your desk and easier to type on the touchscreen.


In portrait mode on a desk, the barrel creates an elevated spine that prevents the tablet from fully laying flat. As a result, you're left with an inclined side, which is fine for casual web surfing and reading, but makes typing awkward.

To make the tablet slim, Lenovo relies on the barrel for several functions. The barrel houses a pair of front-facing, Dolby-tuned speakers. As this is the thickest point on the tablet, it provides more space for the speakers to produce richer sound.

The battery is housed in the barrel as well to keep the overall tablet slim. Lenovo also placed the rear 8-megapixel camera on the barrel. Additionally, the barrel serves as a hinge to stow the mechanical kickstand.


The metal kickstand is released when you press down on it. This opens up the kickstand, and you can then pry the stand fully open. The kickstand allows the tablet to be used in four modes.

According to Lenovo, with the stand closed, you can hold it like a tablet. With the stand engaged, you can stand it up similar to the larger Microsoft Surface Pro 3. You can tilt the tablet on a desk, so it's propped up for easier viewing and more comfortable on-screen typing.

Finally, you can fully open the stand, revealing a small hole in the center of the kickstand that allows you to hang the tablet. This last mode is great if you want to hang the tablet in a workspace so you can watch videos or multitask.


As a tablet, the barrel also serves an ergonomic purpose, making the Yoga Tablet 2 with Windows comfortable to hold for long periods of time. In use, it feels like wrapping the cover of a paperback book around the spine.

Coupled with the tablet's light 0.94-pound (0.43kg) weight, it makes for a very pleasant companion to read an e-book on the couch or in bed. However, magazines, PDFs, and larger format materials will feel cramped on an 8-inch screen with a 16:9 aspect ratio.

The weight of the Yoga Tablet 2 with Windows is comparable to the 0.87-pound (0.39kg) Dell Venue 8 Pro, and is about the same weight as the 0.96-pound (0.44kg) iPad Air 2, though Apple's device has a larger 9.7-inch display. The nice thing about the Lenovo slate is that it feels balanced; when holding the tablet in bed, I never felt like the tablet would fall and smack me in the face.

The Yoga Tablet 2 comes with a minimal array of buttons and ports. Neatly fit on one end of the barrel is a circular power button. The button is surrounded by an LED ring, which lights up when the tablet is plugged in for charging.


The other end of the barrel is home to a 3.5mm headphone jack. A slim Windows button sits on the tablet's bezel, along with a single micro USB port and volume rocker on its side.

Unlike many other Windows slates, the placement of the Windows Start button on the side of the tablet makes it awkward, especially when used in portrait mode. For right-handed users holding the slate in their left hand, the Start button will be on the bottom edge of the device, making it difficult to reach.



Friday, February 20, 2015

Computational Linguistics Reveals How Wikipedia Articles Are Biased Against Women

Despite well-publicized efforts to promote equality, Wikipedia articles are deeply biased against women, say computer scientists who have analysed six different language versions of the online encyclopedia.


One of Wikipedia’s more embarrassing features is that its workforce is dominated by men. Back in 2011, the New York Times reported that only 13 percent of Wikipedia’s editors were women compared to just under half its readers.

As a consequence, the Wikimedia Foundation, which runs the website, set itself various goals to change this gender bias including the target of increasing its proportion of female editors to 25 percent by 2015. Whether that will be achieved is not yet clear.

In the meantime, various researchers have kept a close eye on the way the gender bias among editors may be filtering through to the articles in the encyclopedia itself. Today, Claudia Wagner at the Leibniz Institute for the Social Sciences in Cologne, Germany, and pals at ETH Zurich, Switzerland and the University of Koblenz-Landau, say they have found evidence of serious bias in Wikipedia entries about women, suggesting that gender bias may be more deep-seated and engrained than previously imagined.

Wagner and co begin by comparing six different language versions of Wikipedia with three databases about notable men and women. These databases include Freebase, a database of 120,000 notable individuals, and Pantheon, a database of historical cultural popularity compiled by a team at MIT.

Wagner and co are interested in the proportion of men and women in these databases that are also covered by Wikipedia. And the results make for good reading at Wikipedia.

Wagner and co say that Wikipedia comes out well by this measure and, if anything, women are overrepresented in all the language editions they studied. “We find that women on Wikipedia are covered well in all six Wikipedia language editions,” they say.

What’s more, the team also looked at the proportion of articles about men and women that appear on the start page of the English Wikipedia and say that this does not favor one sex over the other. “The selection procedure of featured articles of the Wikipedia community does not suffer from gender bias,” they conclude.

By these measures Wikipedia is doing well. “These are encouraging findings suggesting that the Wikipedia editor community is sensible to gender inequalities and participates in affirmative action practices that are showing some signs of success,” report Wagner and co.

But there are other signs of a more insidious gender bias that will be much harder to change. “We also find that the way women are portrayed on Wikipedia starkly differs from the way men are portrayed,” they say.

This conclusion is the result of first studying the network of connections between articles on Wikipedia. It turns out that articles about women are much more likely to link to articles about men than vice versa, a finding that holds true for all six language versions of Wikipedia that the team studied.

More serious is the difference in the way these articles refer to men and women as revealed by computational linguistics. Wagner and co studied this by counting the number of words in each biographical article that emphasize the sex of the person involved.

Wagner and co say that articles about women tend to emphasize the fact that they are about women by overusing words like “woman,” “female,” or “lady” while articles about men tend not to contain words like “man,” “masculine,” or “gentleman.” Words like “married,” “divorced,” “children,” or “family” are also much more frequently used in articles about women, they say.
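The counting approach described here can be approximated in a few lines of Python. This is only a sketch: the marker word lists below are hypothetical examples drawn from the words quoted above, and the study's actual lexicon and statistical methodology are more sophisticated.

```python
import re
from collections import Counter

# Hypothetical marker lists for illustration; the paper's lexicon differs.
FEMALE_MARKERS = {"woman", "female", "lady", "married", "divorced",
                  "children", "family"}
MALE_MARKERS = {"man", "masculine", "gentleman"}

def gender_marker_counts(article_text):
    """Count how often gender-emphasizing words appear in a biography."""
    words = re.findall(r"[a-z]+", article_text.lower())
    counts = Counter(words)
    return {
        "female_markers": sum(counts[w] for w in FEMALE_MARKERS),
        "male_markers": sum(counts[w] for w in MALE_MARKERS),
    }

bio = "She was a celebrated woman scientist, married with three children."
print(gender_marker_counts(bio))  # {'female_markers': 3, 'male_markers': 0}
```

Comparing these counts across large samples of male and female biographies is what reveals the asymmetry the researchers report.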

The team thinks this kind of bias is evidence for the practice among Wikipedia editors of considering maleness as the “null gender.” In other words, there is a tendency to assume an article is about a man unless otherwise stated. “This seems to be a plausible assumption due to the imbalance between articles about men and women,” they say.

That’s an interesting study that provides evidence that the Wikimedia Foundation’s efforts to tackle gender bias are bearing fruit. But it also reveals how deep-seated gender bias can be and how hard it will be to root out.

The first step, of course, is to identify and characterize the problem at hand. Wagner and co’s work is an important step in that direction that will allow editors at Wikipedia to continue their vigilance.

Ref: arxiv.org/abs/1501.06307  It’s a Man’s Wikipedia? Assessing Gender Inequality in an Online Encyclopedia



How a Box Could Solve the Personal Data Conundrum

Software known as a Databox could one day both safeguard your personal data and sell it, say computer scientists.

One of the trickiest issues for anyone with an online presence is how to manage personal information. Almost any form of surfing leaves a data trail that advertisers, social networks and so on can use to their advantage.

This data gold rush is largely driven by the dominant online business model in which advertising is the primary source of revenue. The gathered data can sometimes be processed in a way that individuals find useful. But this information can also be abused, sometimes with severe consequences, as anyone who has suffered identity theft will testify.

What’s more, information can fall into the hands of companies almost by default, regardless of the wishes of the owner. For example, Google scans the contents of all e-mails on its Gmail service.

Of course, people can choose to use a different service if they object to this. But they will find it much harder to avoid other people with Gmail accounts. Send them an e-mail and Google will scan the contents anyway.

The options for avoiding these scenarios are not good. The ultimate possibility is opting out of the online world but that is simply not viable for most people. So what to do?

Today, Hamed Haddadi from Queen Mary University of London and a few pals from the University of Cambridge put forward their own manifesto for solving this problem. These guys say the solution is a piece of software that collects personal data and then manages how the information is made available to third parties.

Haddadi and co call this software a Databox and suggest that it could kickstart a new generation of business models in which both individuals and companies profit from the personal data revolution.

The basic idea behind the Databox is that it is a networked service that collates personal information from all of your devices and can also make that data available to organizations that the owner allows. This piece of software must have a number of important attributes.

First, it must be trusted by the individual who uses it. That’s a big ask. The Databox will gather information about browsing habits, buying behavior, financial details such as bank statements, e-mail and social media contacts, as well as calendar entries and so on. Allowing all this to be stored in a single online repository will require a remarkable act of faith from most people. Ensuring the security of a Databox is therefore a crucial requirement.

But the owner of the data is not the only one who needs to share this trust. Any company or organization that accesses the data must also have faith that it is reliable, something that will require third-party auditors who can verify that the system is operating as expected.

As well as gathering personal information, the Databox must allow controlled access to it. So third parties must be able to selectively query any information that the user allows them access to. At the same time, the user must be able to control how this data is accessed and be able to change the settings when necessary.
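The paper is a manifesto and specifies no concrete API, but the controlled-access idea can be sketched as a permission layer that third parties must query through. All class, method, and field names below are hypothetical illustrations:

```python
# Minimal sketch of a Databox-style permission layer (hypothetical API).

class Databox:
    def __init__(self):
        self._data = {}      # field name -> stored value
        self._grants = {}    # requester -> set of fields they may read

    def store(self, field, value):
        self._data[field] = value

    def grant(self, requester, fields):
        self._grants.setdefault(requester, set()).update(fields)

    def revoke(self, requester, fields):
        self._grants.get(requester, set()).difference_update(fields)

    def query(self, requester, fields):
        # Only fields the owner has explicitly granted are returned.
        allowed = self._grants.get(requester, set())
        return {f: self._data[f] for f in fields
                if f in allowed and f in self._data}

box = Databox()
box.store("steps_today", 8500)
box.store("bank_balance", 1234.56)
box.grant("fitness-app", {"steps_today"})

print(box.query("fitness-app", ["steps_today", "bank_balance"]))
# {'steps_today': 8500} -- the ungranted field is silently withheld
```

The key design point is that the owner changes access at one choke point (`grant`/`revoke`) rather than chasing data already copied to third parties.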

Finally, there must be incentives for all those involved to use the Databox. For example, ordinary people may be more likely to use the service if it contains a mechanism that allows third parties to pay for using the data.

It may also provide an incentive for third parties by reducing their exposure to sensitive data, such as health records. For example, an organization may need access to health data but not want the cost and responsibility of storing it securely. “An analogy might be the way online stores use third-party payment services such as PayPal or Google Wallet to avoid the overhead of Payment Card Infrastructure compliance for processing credit card fees,” say Haddadi and co.

That’s an interesting idea but one that faces numerous hurdles before it can come into being. Not least of these is whether there will be sufficient demand for a service like this and whether it can pay for itself. Then there are the challenges of dealing with widely differing data sources and the problem of getting access to proprietary devices such as iPhones.

It may be that governments will have a role to play in creating a regulatory landscape in which this kind of service can flourish. But for the moment, the future is far from certain.

That’s not stopping these guys from dreaming. Many of the authors of this manifesto are involved in a highly ambitious project called Nymote, which is building a software infrastructure that allows people to take control of their digital lives—a Databox in all but name.

It’s an area that is certainly worth watching. After the revelations in recent years about government-sponsored snooping, it’s not worth betting against the possibility of a Databox-like service becoming ubiquitous.

Ref: arxiv.org/abs/1501.04737 : Personal Data: Thinking Inside the Box



How the Next Generation of Botnets Will Exploit Anonymous Networks, and How to Beat Them

Computer scientists are already devising strategies for neutralizing the next generation of malicious botnets.

Botnets are computer programs that talk to each other over the Internet. Some are entirely benign, like those that control Internet chats. But many botnets are entirely malicious, programs that send spam or participate in denial of service attacks and so on. These networks are controlled by individual criminals who use them for nefarious purposes such as generating illicit income or attacking other websites.

The work of finding and stopping this criminal activity has become a global endeavor. The first generation of botnets was relatively simple to stop. Since they were controlled by a single computer somewhere on the Web, the trick was to find that computer and shut it down.

That was straightforward when the programs themselves contained the information necessary to communicate with the command and control server.

But in recent years, this cat and mouse game has become much more sophisticated. Botnets now routinely take steps to hide the location of the command and control server. One approach, known as fast fluxing, is to create a constant stream of IP addresses and map hundreds or thousands of them simultaneously to a domain name. Anybody hoping to find the command and control server would have to search every IP address before it changes.
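A toy model makes the fast-flux idea concrete: the domain resolves to a small, constantly rotating subset of compromised hosts, so no single IP address pins down the backend. The IPs below are from the reserved documentation range, and real fast-flux networks coordinate rotation through short DNS TTLs rather than a round counter:

```python
import itertools

# Pool of compromised hosts fronting for the hidden command server.
compromised_pool = [f"203.0.113.{i}" for i in range(1, 101)]

def flux_records(pool, round_no, ttl_size=5):
    """Return the short-lived A-records served for this rotation round."""
    start = (round_no * ttl_size) % len(pool)
    # Walk the pool as a ring so rotation wraps around cleanly.
    return list(itertools.islice(itertools.cycle(pool),
                                 start, start + ttl_size))

# Each "TTL expiry" the domain points somewhere else entirely.
print(flux_records(compromised_pool, 0))  # 203.0.113.1 .. 203.0.113.5
print(flux_records(compromised_pool, 1))  # 203.0.113.6 .. 203.0.113.10
```

An investigator who enumerates the current records learns only the disposable fronts, which is exactly the property the article describes.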

More recently, botnets have begun to exploit the Tor network, which is designed to allow people to communicate across the Internet anonymously. This, combined with the advent of hard-to-trace electronic currencies such as Bitcoin, has led to the rise of blackmail and ransomware that cannot be traced even after a payment has been made.

Today, Amirali Sanatinia and Guevara Noubir at Northeastern University in Boston say the next generation of botnets is likely to be even more sophisticated. They outline how they believe these botnets will evolve but also suggest a straightforward way to neutralize them.

Sanatinia and Noubir say that the anonymity offered by Tor-like networks will be irresistible to botnet masters, so most innovation will occur in this area. To gain this anonymity, these botnets will have to use a technique called onion routing, which encapsulates messages within multiple layers of encryption, like the layers of an onion.

Each server that the message passes through decrypts one layer of the onion, revealing the next destination. When the final layer is removed, the message has reached its destination. The anonymity comes from the fact that no server along the route knows anything about the message except its next destination.
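A toy sketch of this layering, with repeating-key XOR standing in for real encryption (the functions `wrap` and `relay` and the hop names are invented for illustration; genuine onion routing uses public-key cryptography and fixed-size cells):

```python
def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher: XOR with a repeating key (NOT real encryption).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap(message: bytes, route) -> bytes:
    """Build the onion: encrypt the innermost layer first.

    `route` is a list of (hop_name, key) pairs in travel order;
    each layer records only the next destination.
    """
    payload = message
    for hop, key in reversed(route):
        payload = xor(hop.encode() + b"|" + payload, key)
    return payload

def relay(payload: bytes, route) -> bytes:
    """Simulate each hop peeling exactly one layer."""
    for hop, key in route:
        plain = xor(payload, key)
        dest, _, payload = plain.partition(b"|")
        assert dest == hop.encode()   # a hop learns only the next stop
    return payload

route = [("guard", b"key-one"), ("middle", b"key-two"), ("exit", b"key-xyz")]
onion = wrap(b"connect to c2", route)
```

After all three hops, `relay(onion, route)` recovers the original message, yet no single hop ever sees both where the message came from and what it finally says.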

Sanatinia and Noubir clearly think this level of anonymity will be hard for botnet masters to resist. Consequently, they christen these next-generation botnets “OnionBots” and spend some time explaining exactly how they will have to work to make best use of onion routing.

That sounds suspiciously like a big step towards disaster—the paper is a useful backgrounder for anyone wanting to set up an OnionBot. However, Sanatinia and Noubir have also found a way to neutralize these kinds of OnionBots.

The basic idea is to inject programs into the network that preferentially attach to OnionBots. They then reproduce themselves and effectively surround each OnionBot so it is no longer connected to any other part of the network. When that happens, the OnionBot is isolated and neutralized.
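Pictured on a simple graph, the containment looks something like the sketch below. This is a conceptual illustration, not Sanatinia and Noubir’s actual protocol; the function `neutralize` and the guard-node naming are invented.

```python
def neutralize(adjacency, bot):
    """Cut a bot out of its peer-to-peer graph by surrounding it.

    `adjacency` maps node -> set of neighbours. For every edge the
    bot has, sever it and attach a freshly spawned guard node
    instead, so the bot ends up talking only to guards.
    """
    guards = []
    for i, peer in enumerate(sorted(adjacency[bot])):
        guard = f"guard-{i}"
        adjacency[bot].discard(peer)        # sever the real edge...
        adjacency[peer].discard(bot)
        adjacency[guard] = {bot}            # ...and wall the bot off
        adjacency[bot].add(guard)
        guards.append(guard)
    return guards

graph = {
    "bot":   {"peer1", "peer2", "peer3"},
    "peer1": {"bot", "peer2"},
    "peer2": {"bot", "peer1"},
    "peer3": {"bot"},
}
neutralize(graph, "bot")
```

Once the loop finishes, every neighbour of the bot is a guard, so the bot can no longer relay commands to or from the rest of the network.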

That’s not to say that complete protection against an attack of OnionBots is possible. But Sanatinia and Noubir hope to kick-start work on tackling this next generation of bots before it even gets started. “There are still many challenges that need to be preemptively addressed by the security community, we hope that this work ignites new ideas to proactively design mitigations against the new generations of crypto-based botnets,” they say.

It may be a risky strategy to do this so publicly. On the other hand, a public approach may tap into the broadest pool of security talent. Leave suggestions about how to improve this strategy in the comments section below.

Ref:  arxiv.org/abs/1501.03378 : OnionBots: Subverting Privacy Infrastructure for Cyber Attacks


View the original article here

The Facebook Page That Posts the Same Picture Every Day

A Facebook page that posts the same picture of an Italian singer every day has become the central part of a research project to understand how we use social media.


On the Italian-language version of Facebook, a curious page came into existence in the latter half of 2014. This page is devoted to Toto Cutugno, an Italian singer and songwriter who is famous in that part of the world. Every day, the owner of this page posts a picture of Cutugno; indeed, the same picture every day.

This page is imaginatively entitled “La stessa foto di Toto Cutugno ogni giorno” meaning “The same photo of Toto Cutugno every day”. And it’s easy to imagine that it might attract little attention from the Facebook crowd.

Not so. The posts on this page have received almost 300,000 likes, 14,000 comments and been shared more than 7000 times. It’s fair to say it has a cult following, despite the homogeneity of its content.

That has piqued the interest of Alessandro Bessi at the Institute for Advanced Study in Pavia, Italy, and a few pals, who have made this page a central part of their research. These guys have been studying the way people consume content on social media sites and say that the Toto Cutugno page is the perfect control for their experiments.

The reasons are straightforward. Most of the pages this team is interested in host a wide range of different posts. These posts attract varying levels of interest in the form of likes, shares and comments. But exactly how this interest depends on the content is hard to tease apart.

What’s needed is a control page that always posts the same content so that the team can assess the background level of interest. Enter “La stessa foto di Toto Cutugno ogni giorno”. This page allows the team to see how users interact with a page when the content is held constant.

Bessi and co have compared the way people use this page with 73 public Facebook pages, 34 of them about science and 39 of them about conspiracy theories. They start by counting the number of likes that each user gives to the pages visited. It turns out that this produces a power-law, or heavy-tailed, distribution in which most users give a small number of likes while a few dish out a very large number. Curiously, the users of all the pages—the conspiracy-theory pages, the science pages and the control page—follow this same pattern.

The team also looked at the lifetime of these users—for how long they continued to like pages. Once again, the users of all pages follow a similar pattern, although users of the control page show some differences, such as a larger proportion who continue to like the page for very long periods.

The big difference between the pages is in the number of likes each post receives. For the conspiracy and science pages this follows a heavy-tailed distribution: most posts receive a few likes but a few receive a great many.

One of the features of this kind of distribution is that the concept of an “average” post makes no sense. In a normal distribution, the average coincides with the peak of the curve and so represents an important property of the data. This is useful when talking about the average height of adult men and women, or their average weight, and so on.

But there is no peak in a heavy-tailed distribution so the concept of an average makes no sense. That’s why people never talk about an average-sized earthquake or forest fire or flu outbreak, the sizes of which all follow heavy-tailed distributions.

That’s relevant because the way people “like” posts on the control page is entirely different to the way they “like” other posts. This follows something like a normal distribution in which there is a clear average number of likes.

That reveals something important about the way people like the control page—that the number of likes for any post clusters around some average value.
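The contrast is easy to reproduce with synthetic data: a Pareto distribution stands in for the heavy-tailed science and conspiracy pages, and a Gaussian for the control page. The parameters below are illustrative, not fitted to the actual Facebook data.

```python
import random

random.seed(42)
N = 10_000

# Likes per post: Pareto for the heavy-tailed pages,
# Gaussian for the Toto Cutugno control page (illustrative values).
heavy   = [random.paretovariate(1.2) for _ in range(N)]
control = [random.gauss(50, 10) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

# In the heavy-tailed sample a single extreme post dwarfs the mean,
# so "average likes per post" is uninformative; in the control
# sample the maximum stays within a few standard deviations.
heavy_ratio = max(heavy) / mean(heavy)
control_ratio = max(control) / mean(control)
```

With this seed, `heavy_ratio` comes out far larger than `control_ratio`: the top post in the Pareto sample receives many times the mean number of likes, while in the Gaussian sample the best post is barely above average, which is exactly the pattern Bessi and co report.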

Just why this is the case, Bessi and co do not say. But it’s easy to speculate. The “La stessa foto di Toto Cutugno ogni giorno” page obviously attracts fans of the eponymous Cutugno, who are clearly limited in number. It is this limit that prevents any posts receiving the very large numbers of likes that are characteristic of a heavy tailed distribution.

Bessi and co go on to simulate the way people like each type of page and say they can capture this behaviour with a relatively simple model. “We show that the proposed model is able to reproduce the phenomenon observed from empirical data,” they say.

Just how useful this will be for future studies isn’t clear.  Nevertheless, Cutugno himself must be flattered at the scientific attention his image has generated.

Ref: arxiv.org/abs/1501.07201  Everyday the Same Picture: Popularity and Content Diversity


View the original article here

BitTorrent Tests Websites Hosted in the Crowd, Not the Cloud

An experimental browser shows how peer-to-peer technology can serve up entire websites, not just individual files.

An experimental new Web browser makes it possible for sites to be hosted not on a company’s servers but, instead, by a shifting crowd of individuals on their personal computers. That turns the usual approach to serving up websites on its head and could provide a more effective and reliable way to disseminate bulky media files or distribute vital information in the event of natural disaster.

The new approach is being tested by BitTorrent, the company behind the file-sharing protocol of the same name. It has developed a modified version of Chromium, the open-source version of Google’s Chrome browser. The effort is known as Project Maelstrom.

How People Will Use the Apple Watch

Developers and designers debate whether the Apple Watch will find its purpose.

Thursday, February 19, 2015

Automating the Data Scientists

Software that can discover patterns in data and write a report on its findings could make it easier for companies to analyze their data.

Whether your business is fighting cancer, serving online ads, or governing a country, employees who can dissect and explain complex data have become indispensable.

Now researchers backed by Google are developing software that could automate some of the work performed by such data scientists, in hopes of making sophisticated data skills more widely available. When fed raw data, the “automatic statistician” software spits out a report that uses words and charts to describe the mathematical trends it finds.

Recommended from Around the Web (Week Ending February 14, 2015)

A roundup of the most interesting stories from other sites, collected by the staff at MIT Technology Review.

The Definition of a Dictionary
Merriam-Webster considers a dictionary that lives only on the Web—a project that could give new meaning to the word “unabridged.”
—Linda Lowenthal, copy chief

How One Stupid Tweet Blew Up Justine Sacco’s Life
With social media, everyone can be infamous for 15 minutes. The New York Times checks in with some people whose minutes are up.
—Linda Lowenthal

Tough Old Birds
The solar-weather satellite SpaceX launched this week will replace one of many important but elderly satellites, as the Economist illustrates with a handy chart.
—Mike Orcutt, research editor

Why Do Many Reasonable People Doubt Science?
Not much new ground broken here, but it’s smart and well-written nonetheless.
—Brian Bergstein, deputy editor

Me, Meet Virtual Me
Not just gamers are excited about advances in virtual reality: neuroscientists and psychologists say it can help treat post-traumatic stress disorder and make people healthier.
—Tom Simonite, San Francisco bureau chief

Photographer Imagines What It Looks Like to Run for Your Life
A New York photographer convinces 20 men to run as if their lives depended on it.
—J. Juniper Friedman, associate Web producer

Why Google Glass Broke
Good backstory on Google Glass and why it flopped.
—Antonio Regalado, senior editor, biomedicine


View the original article here

Big Mountains, Big Data: How Technology Helps Push the Boundaries of Human Endeavor

“Men wanted for hazardous journey. Low wages, bitter cold, long hours of complete darkness. Safe return doubtful. Honour and recognition in event of success.”

That was the ad that Sir Ernest Shackleton is said to have placed in a London newspaper to recruit a crew for a 1914 expedition to Antarctica.

Human beings are naturally obsessed with great adventures, especially those in which the risks are formidable, the odds of success are slim, and a great story lies at the end of it all. Reach the poles, sail the Pacific on a balsa raft, climb Mount Everest — the list of such feats is long indeed. But as awe-inspiring as such exploits might have been, they often ended badly. Crews faced hypothermia, scurvy, dehydration, starvation, and more; death dogged them at every corner. For too many, the warnings in the Shackleton ad came true.

Technology Transforms Exploration

Fast forward 100 years to a time when conditions have changed dramatically, thanks largely to advances in technology. While the actual physical challenges remain about the same as before, our ability to deal with them and to survive to tell the story has increased considerably. From highly accurate tracking and measuring devices to near-total global telecommunications coverage (plus dramatically improved food and protective clothing), modern-day explorers have it much easier than their earlier counterparts.

Take Mount Everest expeditions. One of the most famous attempts on Everest was undertaken by British mountaineers George Mallory and Andrew Irvine in 1924. “Perfect weather for the job,” Mallory wrote on June 7, 1924, the day before he and Irvine left for Everest’s summit. They were never seen again. Mallory’s body was found in 1999; 90 years later, Irvine’s is still missing.

Recalling that expedition, the UK’s Guardian newspaper described the climbing attire and gear of the time in these words: “Protected from appalling weather and low temperatures by tweed and cotton, their legs bound in puttees and their feet always half-freezing in inadequate boots, climbers were experimenting on the fringes of human tolerance.”

In addition, early explorers were cut off from all communication while on the mountain. When Sir Edmund Hillary of New Zealand and Tenzing Norgay of Nepal became the first to summit the world’s highest peak on May 29, 1953 (or, as Hillary put it, they “knocked the bastard off”), the report of their conquest was first hand-delivered by a runner to a Nepalese village, eventually making its way to England by radio and telegraph. The news arrived in London just in time to coincide with the era’s blockbuster social event, the coronation of Queen Elizabeth II, on June 2.

Contrast this with the South Pole trek that British polar adventurer Ben Saunders and his teammate Tarka L’Herpiniere undertook in 2013, following explorer Robert F. Scott’s route of a century earlier. Their gear included mobile satellite hubs, freeze-proof laptops, portable solar panels, and a variety of movies and TV shows (everything from Love Actually to Breaking Bad). Saunders blogged regularly from Antarctica, and he also posted updates, pictures, and videos on YouTube, Twitter, Facebook, and other social media channels.

Today, Mount Everest ascents have become an industry, with numerous guide outfits offering deep-pocketed adventurers the trophy of a lifetime. In 2015, median expedition costs are north of $57,000 per climber, and expeditions now regularly haul routers and satellite terminals to base camp (at nearly 18,000 feet). Not to be outdone, telecom companies such as Nepalese cell provider Ncell and global giants like Huawei and China Mobile provide full 4G service on the mountain. Dubai-based Thuraya even provides a sleeve that converts a standard smartphone into a satellite phone.

Meanwhile, cell-phone penetration is rapidly increasing in Nepal. Today, 86 percent of Nepal’s citizens use cell phones, up from just 15 percent in 2008, according to a December 2014 report from the Nepal Telecommunications Authority. With the telecom infrastructure in place, it’s only a matter of time before both western climbing expeditions and local Sherpa communities start taking greater advantage of these technologies. As just one example, they might gather real-time weather data to keep expeditions better informed about changing conditions.

High-Tech Safety Improvements

Although the Himalayas are hundreds of miles inland, they are directly affected by storms that originate in the Bay of Bengal. In May 1996, one rogue storm killed eight people on Mount Everest, a tragedy described in journalist Jon Krakauer’s best-selling book Into Thin Air. Today, expedition leaders can access real-time weather and satellite data (with the assistance of technologies such as SAP HANA). That, in turn, allows them to determine more precisely how long they have before the weather turns bad, giving them enough time to move their teams to safer locations down the mountain.

For Mount Everest climbers ascending via the South Col route, one of the scariest obstacles is the Khumbu Icefall, a steep section where the Khumbu glacier drops and, in the process, breaks into massive ice chunks, some larger than a house. The Khumbu glacier moves 3 to 6 feet every day; until very recently, getting past it involved playing an icy version of Russian roulette.

That game has changed with the introduction of the Extreme Ice Survey, an innovative time-lapse photography project with cameras set up at 28 locations worldwide, including one at Khumbu. The camera snaps a photo every 30 minutes during daylight hours and also uses precise geolocation indicators to determine where and how quickly the glacier is melting. Technology-savvy Everest guides now use more than two years of this time-lapse imagery (about 8,000 images per year) to calculate the odds that a particular section of Khumbu will cave and to determine the times that will likely be safest to cross the icefall.

Reportedly, Mallory was once asked why he wanted to climb Mount Everest, to which he famously replied: “Because it’s there.” A century later, the climb still requires a special breed of human being who desires and is prepared to undertake a potentially hazardous journey. But in many ways, today’s technologies are helping make a safe return far less doubtful.

About SAP Startup Focus:

SAP Startup Focus works with startups in the big data, predictive analytics, and real-time analytics spaces, supporting businesses in building innovative applications using the SAP HANA database platform. More than 1,800 companies currently participate in the program. Join the conversation on Twitter by following @SAPStartups, or follow the author on Twitter: @BansalManju. 

Views from the Marketplace are paid for by our key partners. All Views from the Marketplace have been approved by our team.


View the original article here

Your iPhone Might Make You a Reality TV Star

Broadcasting everything your smartphone sees and hears could be the next trend in social media.

The first time I learned about live streaming was also the first time I realized you could play shuffleboard in Brooklyn, at the Royal Palms Shuffleboard Club.

As my friends and I learned the rules, my teammate, Kevin Porter, started talking to his iPhone. He’d just downloaded Yevvo, he explained. The premise was simple—everything his phone could see and hear, his followers could also see and hear. His girlfriend had stayed home that night, but she was (hypothetically) watching his every move, thanks to this app. He had become the star of his own reality television show, albeit with a very small audience, and we were all his cast mates.

Human Face Recognition Found In Neural Network Based On Monkey Brains

A neural network that simulates the way monkeys recognise faces produces many of the idiosyncratic behaviours found in humans, say computer scientists.

When neuroscientists use functional magnetic resonance imaging to see how a monkey’s brain responds to familiar faces, something odd happens. When shown a familiar face, a monkey’s brain lights up, not in a specific area, but in nine different ones.

Neuroscientists call these areas “face patches” and think they are neural networks with the specialised functions associated with face recognition. In recent years, researchers have begun to tease apart what each of these patches do. However, how they all function together is poorly understood.

Today, we get some insight into this problem thanks to the work of Amirhossein Farzmahdi at the Institute for Research on Fundamental Sciences in Tehran, Iran, and a few pals from around the world. These guys have built a number of neural networks, each with the same functions as those found in monkey brains. They’ve then joined them together to see how they work as a whole.

The result is a neural network that can recognise faces accurately. But that’s not all. The network also displays many of the idiosyncratic properties of face recognition in humans and monkeys, for example, the inability to recognise faces easily when they are upside down.

The new neural network consists of six layers with the first four trained to extract primary features. The first two recognise edges, rather like two areas of the visual cortex known as V1 and V2. The next two layers recognise face parts, such as the pattern of eyes, nose and mouth. These layers simulate the behaviour of parts of the brain called V4 and the anterior IT neurons.

The fifth layer is trained to recognise the same face from different angles. It is known as the view-selective layer and is inspired by parts of monkey brains called middle face patches.

The final layer matches the face to an identity. This is called the identity-selective layer and simulates a part of the simian brain known as the anterior face patch.

Farzmahdi and co train the layers in the system using different image databases. For example, one of the datasets contains 740 face images consisting of 37 different views of 20 people. Another contains images of 90 people taken from 37 different viewing angles. They also have a number of datasets for evaluating specific properties of the neural net.

Having trained the neural network, Farzmahdi and co put it through its paces. In particular, they test whether the network demonstrates known human behaviours when recognising faces.

For example, various behavioural studies have shown that humans recognise faces most easily when they are seen from a three-quarters point of view, that is, halfway between a full frontal view and a profile.

Curiously, Farzmahdi and co say their network behaves in the same way—the optimal viewing angle is the same three-quarter view that humans prefer.

Another curious feature of human face recognition is that it is much harder to recognise faces when they are upside down. And Farzmahdi and co’s neural network shows exactly the same property.

What’s more, it also demonstrates the “composite face effect”. This occurs when identical images of the top of a face are aligned with different bottom halves, in which case humans perceive them as being different people. Neuroscientists say this suggests that face recognition works only on the level of whole faces rather than in parts.

Farzmahdi and co say their new neural network behaves in exactly the same way. It considers composite faces as new identities, suggesting that the network must be recognising faces as a whole, just like humans.

Finally, Farzmahdi and co say that when their neural network is trained using faces of a specific race, it finds it much harder to identify faces of a different race. Once again, that is a phenomenon well known in humans. “People are better at identifying faces of their own race than other races, an effect known as other race effect,” they say.

That’s interesting work because no other face recognition system has been able to reproduce these biological characteristics. The results suggest that Farzmahdi and co have found an interesting way to reproduce these human and monkey behaviours in an artificial system for the first time. “Our proposed model…explains neural response characteristics of monkey face patches; as well several behavioral phenomena observed in humans,” they say.

The process behind this work is almost as fascinating as the result. These guys have taken certain structures found in monkey brains, built a synthetic system based on those structures and then found that the artificial behaviour matches the biological behaviour.

If that works for vision, then might it also work for hearing, touch, balance, movement and so on? And beyond that there is the potential for capturing the essence of being human, which must somehow be captured by structures within the brain.

Post other suggestions in the comments section below.

Clearly, the fields of synthetic neuroscience and artificial intelligence are changing. And quickly.

Ref: arxiv.org/abs/1502.01241  A Specialized Face-Processing Network Consistent With The Representational Geometry Of Monkey Face Patches


View the original article here

Wednesday, February 18, 2015

The Face Detection Algorithm Set To Revolutionise Image Search

The ability to spot faces from any angle, and even when partially occluded, has always been a uniquely human capability. Not any more.

Back in 2001, two computer scientists, Paul Viola and Michael Jones, triggered a revolution in the field of computer face detection. After years of stagnation, these guys’ breakthrough was an algorithm that could spot faces in an image in real time. Indeed, the so-called Viola-Jones algorithm was so fast and simple that it was soon built into standard point and shoot cameras.

Part of their trick was to ignore the much more difficult problem of face recognition and concentrate only on detection. They also focused only on faces viewed from the front, ignoring any seen from an angle. Given these bounds, they realised that the bridge of the nose usually formed a vertical line that was brighter than the eye sockets nearby. They also noticed that the eyes were often in shadow and so formed a darker horizontal band.

So Viola and Jones built an algorithm that first looks for vertical bright bands in an image that might be noses, then looks for horizontal dark bands that might be eyes, and then looks for other general patterns associated with faces.

By themselves, none of these features is strongly suggestive of a face. But when they are detected one after the other in a cascade, the result is a good indication of a face in the image. Hence the name of this process: a detector cascade. And since these tests are all simple to run, the resulting algorithm can work in real time.
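In miniature, a detector cascade looks like the sketch below. This is a toy illustration on a tiny brightness grid; the two stage tests are crude stand-ins for the thousands of Haar-like features a real Viola-Jones detector learns, and all the names are invented.

```python
def column_mean(window, c):
    return sum(row[c] for row in window) / len(window)

def row_mean(window, r):
    return sum(window[r]) / len(window[r])

def bright_nose_band(window):
    # Stage 1: the centre column (nose bridge) should be brighter
    # than the columns at either edge.
    mid = len(window[0]) // 2
    return (column_mean(window, mid) > column_mean(window, 0)
            and column_mean(window, mid) > column_mean(window, -1))

def dark_eye_band(window):
    # Stage 2: a horizontal band (the eyes) should be darker
    # than the window as a whole.
    eye_row = len(window) // 3
    overall = sum(sum(r) for r in window) / (len(window) * len(window[0]))
    return row_mean(window, eye_row) < overall

def detector_cascade(window, stages=(bright_nose_band, dark_eye_band)):
    """Reject at the first failed stage; `all` short-circuits,
    which is what makes a cascade so cheap on non-face windows."""
    return all(stage(window) for stage in stages)

# Toy 3x3 brightness grids: a crude "face" and a featureless patch.
face_like = [[1, 5, 1],
             [2, 9, 2],
             [6, 9, 6]]
blank_wall = [[5, 5, 5]] * 3
```

The featureless patch fails the very first test, so the second never runs; in a real detector most windows in an image are rejected this way after only a handful of arithmetic operations, which is what made real-time detection possible.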

But while the Viola-Jones algorithm was something of a revelation for faces seen from the front, it cannot accurately spot faces from any other angle. And that severely limits how it can be used for face search engines.

Which is why Yahoo is interested in this problem. Today, Sachin Farfade and Mohammad Saberian at Yahoo Labs in California and Li-Jia Li at Stanford University nearby reveal a new approach that can spot faces at an angle, even when they are partially occluded. They say their new approach is simpler than others and yet achieves state-of-the-art performance.

Farfade and co use a fundamentally different approach to build their model. These guys capitalise on the advances made in recent years in a type of machine learning known as a deep convolutional neural network. The idea is to train a many-layered neural network using a vast database of annotated examples, in this case pictures of faces from many angles.

To that end, Farfade and co created a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They then trained their neural net in batches of 128 images over 50,000 iterations.

The result is a single algorithm that can spot faces from a wide range of angles, even when partially occluded. And it can spot many faces in the same image with remarkable accuracy.

The team call this approach the Deep Dense Face Detector and say it compares well with other algorithms.  “We evaluated the proposed method with other deep learning based methods and showed that our method results in faster and more accurate results,” they say.

What’s more, their algorithm is significantly better at spotting faces when upside down, something other approaches haven’t perfected. And they say that it can be made even better with datasets that include more upside down faces. “In future we are planning to use better sampling strategies and more sophisticated data augmentation techniques to further improve performance of the proposed method for detecting occluded and rotated faces.”

That’s interesting work that shows how fast face detection is progressing. The deep convolutional neural network technique is only a couple of years old itself and already it has led to major advances in object and face recognition.  

The great promise of this kind of algorithm is in image search. At the moment, it is straightforward to hunt for images taken at a specific place or at a certain time. But it is hard to find images of specific people. This is a step in that direction. It is inevitable that this capability will be with us in the not-too-distant future.

And when it arrives, the world will become a much smaller place. It’s not just future pictures that will become searchable but the entire history of digitised images including vast stores of video and CCTV footage. That’s going to be a powerful force, one way or another.

Ref: arxiv.org/abs/1502.02766  Multi-view Face Detection Using Deep Convolutional Neural Networks


View the original article here

A Film Studio for the Age of Virtual Reality

A Montreal-based film studio is making movies that you’ll watch with a virtual-reality headset, pointing the way to a whole new form of entertainment.

A still from Wild—The Experience, a short virtual-reality film.

Imagine sitting back in a chair, sliding a headset over your eyes and headphones over your ears. Suddenly, you’re sitting on a rock in a sun-dappled clearing, surrounded by tall trees, alone with the noises of the forest. Alone, that is, until you turn your head and spot Reese Witherspoon walking toward you, looking like a haggard camper with a giant pack on her back.

This is what it’s like to watch the opening bit of Wild—The Experience, a short virtual-reality film made as a promotion for the Witherspoon-led movie Wild, which is based on Cheryl Strayed’s book about her trek along the Pacific Crest Trail. Despite the clunky feeling of a headset on your face, for a few moments you feel transported to someone else’s reality. You sense the calming stillness of nature and see it all around you—a contrast with the weirdness of watching Witherspoon stopping to rest on your left without acknowledging your presence.

This is just one immersive experience that Félix Lajeunesse and Paul Raphaël are creating at Felix & Paul Studios, their Montreal-based film production company that focuses on live-action 3-D and virtual-reality films. Their studio and a few others are exploring ways to take virtual reality beyond video games. “We like to think of virtual reality not as a medium to actually create horror stories and heavy adrenaline-driven emotions, but rather to use it as a way to enhance the human experience,” Lajeunesse says.

The world of virtual-reality films is still small—it’s not much more than a collection of experiments, and to check any of them out you’ll need a headset of some sort. But the continued development of headsets such as Oculus Rift, the Samsung-Oculus Gear VR, and Sony’s Project Morpheus signal that immersive display technologies may finally be about to go mainstream.

Seven Must-Read Stories (Week Ending February 14, 2015)


View the original article here

Can Twitter Fix Its Harassment Problem without Losing Its Soul?

Harassment has become a major issue online. Twitter’s efforts to crack down on problem users might suggest a broader solution.

At least Twitter admits it has a problem. In an internal memo leaked last week, CEO Dick Costolo acknowledged what many people on Twitter already knew: 140 characters at a time, many of the service’s users are routinely harassed, abused, or threatened, and the company isn’t doing much to stop it.

Costolo’s note suggested that Twitter would take new action against harassers—a potentially important step at a time when online abuse has reached troubling proportions. Twitter’s effort might offer a template for addressing the wider problem, but it may also show the challenge of stamping out unacceptable behavior without eroding the character of an inherently unruly and combative community. Rules that reduce harassment might have the unintended consequences of slowing the flow of information and turning off some ardent users.