Sunday, April 5, 2015

Why Venture Capitalists Love Security Firms Right Now



View the original article here

Virtual Reality Advertisements Get in Your Face

Some companies see virtual and augmented reality as a way to make money from a new type of ad.

Recommended from Around the Web (Week Ending March 21, 2015)

Sorry, I could not read the content from this page.

View the original article here

Saturday, April 4, 2015

Reality Check: Comparing HoloLens and Magic Leap

After trying demos of Magic Leap and HoloLens, it’s clear that commercializing augmented reality technology will be difficult.

A mockup shows HoloLens being used to provide remote help with home repairs.

I’ve seen two competing visions for a future in which virtual objects are merged seamlessly with the real world. Both were impressive in part, but they also made me wonder whether augmented reality will become a successful commercial reality anytime soon.

I’m the only person I know of to have tried both Microsoft’s HoloLens and the system being developed by a secretive startup called Magic Leap.

I got a peek at what Magic Leap is building back in December (see “10 Breakthrough Technologies 2015: Magic Leap”). In that demonstration, 3-D monsters and robots looked amazingly detailed and crisp, fitting in well with the surrounding world, though they were visible only with lenses attached to bulky hardware sitting on a cart, and no release date has yet been revealed.

I had my chance to see HoloLens during a recent visit to Microsoft’s headquarters in Redmond, Washington. HoloLens is a holographic system that the company plans to pack into a visor about the size of a pair of bulbous ski goggles. In January, Microsoft said HoloLens would be available “in the Windows 10 time frame,” and the company said this week that the new operating system will be released this summer.

I experienced three HoloLens demos. The first, HoloStudio, showed the possibilities for 3-D modeling and manipulation. The second let me explore the surface of Mars with a virtually present NASA scientist. And the third gave a sense of how I might use HoloLens in combination with a Skype video chat to get help with a real-world problem (in this case, installing a light switch).

Unlike some stereoscopic virtual-reality technologies I’ve tried, such as Oculus Rift, HoloLens did not make me feel nauseous, which bodes well. Microsoft would not say how it works, but an explanation in a Wired piece suggests it may do something similar to Magic Leap, which uses a tiny projector to shine light at your eyes so that it blends well with the light from the real world around you.

The final form Microsoft’s HoloLens will take.

But I was not blown away by what I saw in Redmond. The holograms looked great in a couple of instances, such as when I peered at the underside of a rock on a reconstruction of the surface of Mars, created with data from the Curiosity rover. More often, though, images appeared distractingly transparent and not nearly as crisp as the creatures Magic Leap showed me some months before. What’s more, the viewing area in front of my face was relatively narrow, and because the headset isn’t closed off I kept my natural peripheral vision of the unenhanced room, so the 3-D imagery was often interrupted by glimpses of the real world at the edges. This was okay when I was looking at smaller or farther-away 3-D images, like an underwater scene I was shown during my first demo, or while moving around to inspect images close-up from different angles. The illusion fell apart, though, when I looked at something larger than my field of view.

Microsoft is also still working on packing everything into the HoloLens form it has promised. Unlike the untethered headset that the company demonstrated in January, the device I tried was unwieldy and unfinished: it had see-through lenses attached to a heavy mass of electronics and plastic straps, tethered to a softly whirring rectangular box (Microsoft’s holographic processing unit) that I had to wear around my neck and to a nearby computer. I was instructed to touch only a plastic strap that fit over the top of my head; demo minders placed it on me and took it off at the end of each experience.

Even this level of limited mobility was more than I got at Magic Leap, but it’s clear the HoloLens team has a big task in getting the technology to fit into its smaller, consumer-ready design.

For instance, during the Mars demo, the room around me was blanketed with realistic images of the surface of the planet, and a detailed-looking rover sat in front of me, slightly to my right. But I could only see it one rectangle at a time; if my eyes strayed beyond that rectangle in front of me, I’d see bits of the room, but no hologram.

The issues extended to the opacity of the images, too. The demos were all held in rooms that had no windows, but lighting was kept at a normal level of brightness, and the rooms were decorated with furniture, knick-knacks, and other items on and near the walls—not unlike your average living room, and the kind of environment in which you’d be likely to use a HoloLens if you bought one for yourself. Yet I could often see bits of the room peeking through the images themselves in a way that interrupted, rather than worked with, the illusion.

The most impressive part of the HoloLens demos was the use of sensors to track where I was looking and gesturing, as well as what I was saying. My gaze was effectively a mouse, accurately highlighting what I was looking at. An up-and-down motion with my index finger—dubbed an “air tap” by the HoloLens crew—functioned as the mouse click to do things like paint a fish or place a flag in a certain spot on Mars. (I screwed this up a number of times, mostly because I wasn’t holding my finger up high enough.) Simple voice commands like “copy” and “rotate” worked well, too.

Microsoft’s Wristband Would Like to Be Your Life Coach

Microsoft is working to combine biometric data collected by its new wristband with information from your calendar and contacts to make smarter observations.

Facebook AI Software Learns and Answers Questions

Software able to read a synopsis of Lord of the Rings and answer questions about it could beef up Facebook search.

Facebook is working on artificial intelligence software that can process text and then answer questions about it. The effort could eventually lead to anything from better search on Facebook itself to more accurate and useful personal assistant software.

The social network’s chief technology officer, Mike Schroepfer, introduced the software, called Memory Network, in a talk at Facebook’s F8 developer conference in San Francisco on Thursday. He demonstrated how the software could acquire knowledge from text by showing how it was fed a super-simple synopsis of the book “Lord of the Rings”, in the form of phrases including “Bilbo travelled to the cave” and “Gollum dropped the ring there.” After that, the software could answer questions that required following the flow of events in the text, such as “Where is the ring?” and “Where is Frodo now?”
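
To give a concrete flavor of that demo, here is a toy sketch in OCaml. It is emphatically not Facebook’s Memory Network and involves no learning at all: it hand-codes two sentence patterns and remembers the most recent location mentioned for each entity, which is just enough to answer “Where is the ring?” for the two statements above.

(* Toy sketch only: not the Facebook Memory Network and no machine
   learning. It answers a where-is question by remembering the most
   recent location mentioned for each entity in the two statements. *)
let story = [
  "bilbo travelled to the cave";
  "gollum dropped the ring there";
]

let facts =
  (* Fold over the story, collecting (entity, location) pairs; the word
     there is resolved to the most recently mentioned location. *)
  let step (facts, last_loc) sentence =
    match String.split_on_char ' ' sentence with
    | [who; "travelled"; "to"; "the"; place] ->
        ((who, place) :: facts, Some place)
    | [_; "dropped"; "the"; thing; "there"] ->
        (match last_loc with
         | Some place -> ((thing, place) :: facts, last_loc)
         | None -> (facts, last_loc))
    | _ -> (facts, last_loc)
  in
  fst (List.fold_left step ([], None) story)

let () =
  match List.assoc_opt "ring" facts with
  | Some place -> Printf.printf "The ring is in the %s.\n" place
  | None -> print_endline "I do not know."
  (* Prints: The ring is in the cave. *)

Hand-written patterns like these are exactly what the deep-learning approach described below is meant to replace: Memory Network learns to follow the flow of events from examples rather than from rules a programmer typed in.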

Extracting information from text and figuring out how to put it together into brand-new facts is a difficult task for computers. As Schroepfer noted in his demo, it requires the machine to understand the relationships between objects over time.

Facebook is making this work with a new twist on a recently popular approach to machine learning called deep learning (see “10 Breakthrough Technologies 2014: Deep Learning”). That technique involves using networks of crude “neurons” to process data. Facebook added what Schroepfer described as a “multimillion-slot memory system” to such a network, which functions essentially as a short-term memory where facts can be stored and processed.

Facebook set up a research group dedicated to deep learning in 2013 (see “Facebook Launches Advanced AI Effort”). Like similar groups at Google and elsewhere, it has largely focused on using the technique to make software able to figure out what’s going on in images. In his talk, Schroepfer also showed results from a project in which his researchers taught deep learning software to classify 487 different sports from looking at video clips. He said it is good enough to differentiate between figure skating, speed skating, artistic roller skating, and ice hockey.

That such an apparently easy task is a major achievement is a reminder that even deep learning software remains far from truly intelligent. Such software could be valuable, though. If applied to the many videos uploaded to Facebook, it might make it possible to, say, show you ads that are closely targeted to what you’re watching.


View the original article here

High-Resolution 3-D Scans Built from Drone Photos

A drone spent hours swarming around Rio’s iconic Christ statue to show a cheap way to capture highly accurate 3-D scans.

Seven Must-Read Stories (Week Ending March 28, 2015)

Sorry, I could not read the content from this page.

View the original article here

Our Fear of Artificial Intelligence

Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.

But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Thing reviewed

“Superintelligence: Paths, Dangers, Strategies”
By Nick Bostrom
Oxford University Press, 2014

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.

Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

Volition

The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

Extreme AI predictions are “comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,” Rodney Brooks writes.

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.

You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with Bostrom’s center, this is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.”

No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple’s Siri to Google’s driverless cars, also reveal the technology’s severe limitations; both can be thrown off by situations that they haven’t encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.

This is where skeptics such as Brooks, a founder of iRobot and Rethink Robotics, come in. Even if it’s impressive—relative to what earlier computers could manage—for a computer to recognize a picture of a cat, the machine has no volition, no sense of what cat-ness is or what else is happening in the picture, and none of the countless other insights that humans have. In this view, AI could possibly lead to intelligent machines, but it would take much more work than people like Bostrom imagine. And even if it could happen, intelligence will not necessarily lead to sentience. Extrapolating from the state of AI today to suggest that superintelligence is looming is “comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner,” Brooks wrote recently on Edge.org. “Malevolent AI” is nothing to worry about, he says, for a few hundred years at least.

Insurance policy

Even if the odds of a superintelligence arising are very long, perhaps it’s irresponsible to take the chance. One person who shares Bostrom’s concerns is Stuart J. Russell, a professor of computer science at the University of California, Berkeley. Russell is the author, with Peter Norvig (a peer of Kurzweil’s at Google), of Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for two decades.

“There are a lot of supposedly smart public intellectuals who just haven’t a clue,” Russell told me. He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore’s Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.

Bostrom’s book proposes ways to align computers with human needs. We’re basically telling a god how we’d like to be treated.

Because Google, Facebook, and other companies are actively looking to create an intelligent, “learning” machine, he reasons, “I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just seems a bit daft.” Russell made an analogy: “It’s like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you’d better contain the fusion reaction.” Similarly, he says, if you want unlimited intelligence, you’d better figure out how to align computers with human needs.

Bostrom’s book is a research proposal for doing so. A superintelligence would be godlike, but would it be animated by wrath or by love? It’s up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We’re basically telling a god how we’d like to be treated. How to proceed?

Bostrom draws heavily on an idea from a thinker named Eliezer Yudkowsky, who talks about “coherent extrapolated volition”—the consensus-derived “best self” of all people. AI would, we hope, wish to give us rich, happy, fulfilling lives: fix our sore backs and show us how to get to Mars. And since humans will never fully agree on anything, we’ll sometimes need it to decide for us—to make the best decisions for humanity as a whole. How, then, do we program those values into our (potential) superintelligences? What sort of mathematics can define them? These are the problems, Bostrom believes, that researchers should be solving now. Bostrom says it is “the essential task of our age.”

For the civilian, there’s no reason to lose sleep over scary robots. We have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. They should also be attuned to its potential downsides and figure out how to avoid them.

This somewhat more nuanced suggestion—without any claims of a looming AI-mageddon—is the basis of an open letter on the website of the Future of Life Institute, the group that got Musk’s donation. Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI “while avoiding potential pitfalls.” This letter is signed not just by AI outsiders such as Hawking, Musk, and Bostrom but also by prominent computer scientists (including Demis Hassabis, a top AI researcher). You can see where they’re coming from. After all, if they develop an artificial intelligence that doesn’t share the best human values, it will mean they weren’t smart enough to control their own creations.

Paul Ford, a freelance writer in New York, wrote about Bitcoin in March/April 2014.


View the original article here

Friday, April 3, 2015

Ripple, a Cryptocurrency Company, Wants to Rewire Bank Authentication

A digital-currency company thinks it can protect the personal information used to perform identity checks in the financial industry.

Companies built around Bitcoin and other digital currencies mostly focus on storing and transferring money. But at least one company is trying to prove that some of the underlying technology can have a much wider impact on the financial industry.

That startup, Ripple Labs, has already had some success persuading banks to use its Bitcoin-inspired protocol to speed up money transfers made in any currency, especially across borders (see “50 Smartest Companies 2014: Ripple Labs”). Now it is building a system that uses some similar cryptographic tricks to improve the way financial companies check the identity of their customers. The system could also provide a more secure way to log in to other online services.

Toolkits for the Mind

When the Japanese computer scientist Yukihiro Matsumoto decided to create Ruby, a programming language that has helped build Twitter, Hulu, and much of the modern Web, he was chasing an idea from a 1966 science fiction novel called Babel-17 by Samuel R. Delany. At the book’s heart is an invented language of the same name that upgrades the minds of all those who speak it. “Babel-17 is such an exact analytical language, it almost assures you technical mastery of any situation you look at,” the protagonist says at one point. With Ruby, Matsumoto wanted the same thing: to reprogram and improve the way programmers think.

It sounds grandiose, but Matsumoto’s isn’t a fringe view. Software developers as a species tend to be convinced that programming languages have a grip on the mind strong enough to change the way you approach problems—even to change which problems you think to solve. It’s how they size up companies, products, their peers: “What language do you use?”

That can help outsiders understand the software companies that have become so powerful and valuable, and the products and services that infuse our lives. A decision that seems like the most inside kind of inside baseball—whether someone builds a new thing using, say, Ruby or PHP or C—can suddenly affect us all. If you want to know why Facebook looks and works the way it does and what kinds of things it can do for and to us next, you need to know something about PHP, the programming language Mark Zuckerberg built it with.

Among programmers, PHP is perhaps the least respected of all programming languages. A now canonical blog post on its flaws described it as “a fractal of bad design,” and those who willingly use it are seen as amateurs. “There’s this myth of the brilliant engineering that went into Facebook,” says Jeff Atwood, co-creator of the popular programming question-and-answer site Stack Overflow. “But they were building PHP code in Windows XP. They were hackers in almost the derogatory sense of the word.” In the space of 10 minutes, Atwood called PHP “a shambling monster,” “a pandemic,” and a haunted house whose residents have come to love the ghosts.

Things reviewed

Babel-17
By Samuel R. Delany
1966

Real World OCaml
By Yaron Minsky et al.
O’Reilly, 2013

PHP, Hack, and Scala

Most successful programming languages have an overall philosophy or set of guiding principles that organize their vocabulary and grammar—the set of possible instructions they make available to the programmer—into a logical whole. PHP doesn’t. Its creator, Rasmus Lerdorf, freely admits he just cobbled it together. “I don’t know how to stop it,” he said in a 2003 interview. “I have absolutely no idea how to write a programming language—I just kept adding the next logical step along the way.”

Programmers’ favorite example is a PHP function called “mysql_escape_string,” which rids a query of malicious input before sending it off to a database. (For an example of a malicious input, think of a form on a website that asks for your e-mail address; a hacker can enter code in that slot to force the site to cough up passwords.) When a bug was discovered in the function, a new version was added, called “mysql_real_escape_string,” but the original was not replaced. The result is a bit like having two similar-looking buttons right next to each other in an airline cockpit: one that puts the landing gear down and one that puts it down safely. It’s not just an affront to common sense—it’s a recipe for disaster.

Yet despite the widespread contempt for PHP, much of the Web was built on its back. PHP powers 39 percent of all domains, by one estimate. Facebook, Wikipedia, and the leading publishing platform WordPress are all PHP projects. That’s because PHP, for all its flaws, is perfect for getting started. The name originally stood for “personal home page.” It made it easy to add dynamic content like the date or a user’s name to static HTML pages. PHP allowed the leap from tinkering with a website to writing a Web application to be so small as to be imperceptible. You didn’t need to be a pro.

PHP’s get-going-ness was crucial to the success of Wikipedia, says Ori Livneh, a principal software engineer at the Wikimedia Foundation, which operates the project. “I’ve always loathed PHP,” he tells me. The project suffers from large-scale design flaws as a result of its reliance on the language. (They are partly why the foundation didn’t make Wikipedia pages available in a version adapted for mobile devices until 2008, and why the site didn’t get a user-friendly editing interface until 2013.) But PHP allowed people who weren’t—or were barely—software engineers to contribute new features. It’s how Wikipedia entries came to display hieroglyphics on Egyptology pages, for instance, and handle sheet music.

The programming language PHP created and sustains Facebook’s move-fast, hacker-oriented corporate culture.

You wouldn’t have built Google in PHP, because Google, to become Google, needed to do exactly one thing very well—it needed search to be spare and fast and meticulously well engineered. It was made with more refined and powerful languages, such as Java and C++. Facebook, by contrast, is a bazaar of small experiments, a smorgasbord of buttons, feeds, and gizmos trying to capture your attention. PHP is made for making—for cooking up features quickly.

You can almost imagine Zuckerberg in his Harvard dorm room on the fateful day that Facebook was born, doing the least he could to get his site online. The Web moves so fast, and users are so fickle, that the only way you’ll ever be able to capture the moment is by being first. It didn’t matter if he made a big ball of mud, or a plate of spaghetti, or a horrible hose cabinet (to borrow from programmers’ rich lexicon for describing messy code). He got the thing done. People could use it. He wasn’t thinking about beautiful code; he was thinking about his friends logging in to “Thefacebook” to look at pictures of girls they knew.

Today Facebook is worth more than $200 billion and there are signs all over the walls at its offices: “Done is better than perfect”; “Move fast and break things.” These bold messages are supposed to keep employees in tune with the company’s “hacker” culture. But these are precisely PHP’s values. Moving fast and breaking things is in fact so much the essence of PHP that anyone who “speaks” the language indelibly thinks that way. You might say that the language itself created and sustains Facebook’s culture.

The secret weapon

If you wanted to find the exact opposite of PHP, a kind of natural experiment to show you what the other extreme looked like, you couldn’t do much better than the self-serious Lower Manhattan headquarters of the financial trading firm Jane Street Capital. The 400-person company claims to be responsible for roughly 2 percent of daily equity trading volume in the United States.

When I meet Yaron Minsky, Jane Street’s head of technology, he’s sitting at a desk with a working Enigma machine beside him, one of only a few dozen of the World War II code devices left in the world. I would think it the clear winner of the contest for Coolest Secret Weapon in the Room if it weren’t for the way he keeps talking about an obscure programming language called OCaml. Minsky, a computer science PhD, convinced his employer 10 years ago to rewrite the company’s entire trading system in OCaml. Before that, almost nobody used the language for actual work; it was developed at a French research institute by academics trying to improve a computer system that automatically proves mathematical theorems. But Minsky thought OCaml, which he had gotten to know in grad school, could replace the complex Excel spreadsheets that powered Jane Street’s trading systems.

OCaml’s big selling point is its “type system,” which is something like Microsoft Word’s grammar checker, except that instead of just putting a squiggly green line underneath code it thinks is wrong, it won’t let you run it. Programs written with a type system tend to be far more reliable than those written without one—useful when a program might trade $30 billion on a big day.
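
To make that analogy concrete, here is a minimal sketch in OCaml (my own illustration, not Jane Street’s code; the order record and its fields are invented for the example). The compiler simply refuses to build a program whose types do not line up, so the mistake never gets a chance to run.

(* Minimal illustration, not Jane Street code: the type checker rejects
   a program that mixes up its types before it can ever run. *)
type order = { symbol : string; shares : int; limit_price : float }

let notional (o : order) : float =
  (* The integer share count must be converted explicitly; OCaml never
     silently mixes integers and floating-point numbers. *)
  float_of_int o.shares *. o.limit_price

let () =
  let o = { symbol = "XYZ"; shares = 100; limit_price = 25.50 } in
  Printf.printf "notional = %.2f\n" (notional o)

(* Writing  o.shares *. o.limit_price  would not compile at all: the
   checker reports a type error instead of letting a subtly wrong
   calculation into a trading system. *)

The grammar-checker comparison actually undersells the strictness: code that fails the check does not merely get a warning squiggle, it does not build.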

Minsky says that by catching bugs, OCaml’s type system allows Jane Street’s coders to focus on loftier problems. One wonders if they have internalized the system’s constant nagging over time, so that OCaml has become a kind of Newspeak that makes it impossible to think bad thoughts.

The catch is that for the type checker to do its job, the programmers have to add complex annotations to their code. It’s as if Word’s grammar checker required you to diagram all your sentences. Writing code with type constraints can be a nuisance, even demoralizing. To make it worse, OCaml, more than most other programming languages, traffics in a kind of deep abstract math far beyond most coders. The language’s rigor is like catnip to some people, though, giving Jane Street an unusual advantage in the tight hiring market for programmers. Software developers mostly join Facebook and Wikipedia in spite of PHP. Minsky says that OCaml—along with his book Real World OCaml—helps lure a steady supply of high-quality candidates. The attraction isn’t just the language but the kind of people who use it. Jane Street is a company where they play four-person chess in the break room. The culture of competitive intelligence and the use of a fancy programming language seem to go hand in hand.

Google appears to be trying to pull off a similar trick with Go, a high-performance programming language it developed. Intended to make the workings of the Web more elegant and efficient, it’s good for developing the kind of high-stakes software needed to run the collections of servers behind large Web services. It also acts as something like a dog whistle to coders interested in the new and the difficult.

Growing up

In late 2010, Facebook was having a crisis. PHP was not built for performance, but it was being asked to perform. The site was growing so fast it seemed that if something didn’t change fairly drastically, it would start falling over.

Switching languages altogether wasn’t an option. Facebook had millions of lines of PHP code, thousands of engineers expert in writing it, and more than half a billion users. Instead, a small team of senior engineers was assigned to a special project to invent a way for Facebook to keep functioning without giving up on its hacky mother tongue.

One part of the solution was to create a piece of software—a compiler—that would translate Facebook’s PHP code into much faster C++ code. The other was a feat of computer linguistic engineering that let Facebook’s programmers keep their PHP-ian culture but write more reliable code.

Startups can cleverly use the power of programming languages to manipulate their organizational psychology.

The rescue squad did it by inventing a dialect of PHP called Hack. Hack is PHP with an optional type system; that is, you can write plain old quick and dirty PHP—or, if you so choose, you can tie yourself to the mast, adding annotations to let the type system check the correctness of your code. That this type checker is written entirely in OCaml is no coincidence. Facebook wanted its coders to keep moving fast in the comfort of their native tongue, but it didn’t want them to have to break things as they did it. (Last year Zuckerberg announced a new engineering slogan: “Move fast with stable infra,” using the hacker shorthand for the infrastructure that keeps the site running.)

Around the same time, Twitter underwent a similar transformation. The service was originally built with Ruby on Rails—a popular Web programming framework created using Matsumoto’s Ruby and inspired in large part by PHP. Then came the deluge of users. When someone with hundreds of thousands of followers tweeted, hundreds of thousands of other people’s timelines had to be immediately updated. Big tweets like that would frequently overwhelm the system and force engineers to take the site down to allow it to catch up. They did it so often that the “fail whale” on the company’s maintenance page became famous in its own right. Twitter stopped the bleeding by replacing large pieces of the service’s plumbing with a language called Scala. It should not be surprising that Scala, like OCaml, was developed by academics, has a powerful type system, and prizes correctness and performance even at the expense of the individual programmers’ freedom and delight in their craft.

Probing the Whole Internet for Weak Spots

Rapidly scanning the Internet has become vital to efforts to keep it secure.

When a major flaw in the encryption that secures websites was revealed this March, Zakir Durumeric, a research fellow at the University of Michigan, was the first person to know how serious it was. By performing a scan of every device on the Internet, he grasped its full extent even before the researchers who had first identified the flaw, known as FREAK.

“There were questions as to the correct way to respond before we did the scan,” says Durumeric.

The Emerging Science of Human-Data Interaction

The rapidly evolving ecosystem associated with personal data is creating an entirely new field of scientific study, say computer scientists. And this requires a much more powerful ethics-based infrastructure.


Back in 2013, the UK supermarket giant Tesco announced that it was installing face recognition software in 450 of its stores that would identify customers as male or female, guess their age, and measure how long they looked at an ad displayed on a screen below the camera. Tesco would then give the data to advertisers to show them how well their advertising worked and allow them to target their ads more carefully.

Many commentators pointed out the similarity between this system and the sci-fi film Minority Report in which people are bombarded by personalised ads which detect who they are and where they are looking.

It also raised important questions about data collection and privacy. How would customers understand the potential uses of this kind of data, how would they agree to these uses and how could they control the data after it was collected?

Now Richard Mortier at the University of Nottingham in the UK and a few pals say the increasingly complex, invasive and opaque use of data should be a call to arms to change the way we study data, interact with it and control its use. Today, they publish a manifesto describing how a new science of human-data interaction is emerging from this “data ecosystem” and say that it combines disciplines such as computer science, statistics, sociology, psychology and behavioural economics.

They start by pointing out that the long-standing discipline of human-computer interaction research has always focused on computers as devices to be interacted with. But our interaction with the cyber world has become more sophisticated as computing power has become ubiquitous, a phenomenon driven by the Internet and by mobile devices such as smartphones. Consequently, humans are constantly producing and revealing data in all kinds of different ways.

Mortier and co say there is an important distinction between data that is consciously created and released such as a Facebook profile; observed data such as online shopping behaviour; and inferred data that is created by other organisations about us, such as preferences based on friends’ preferences.

This leads the team to identify three key themes associated with human-data interaction that they believe the communities involved with data should focus on.

The first of these is concerned with making data, and the analytics associated with it, both transparent and comprehensible to ordinary people. Mortier and co describe this as the legibility of data and say that the goal is to ensure that people are clearly aware of the data they are providing, the methods used to draw inferences about it and the implications of this.

Making people aware of the data being collected is straightforward but understanding the implications of this data collection process and the processing that follows is much harder. In particular, this could be in conflict with the intellectual property rights of the companies that do the analytics.

An even more significant factor is that the implications of this processing are not always clear at the time the data is collected. A good example is the way the New York Times tracked down an individual after her seemingly anonymized searches were published by AOL. It is hard to imagine that this individual had any idea that the searches she was making would later allow her identification.

The second theme is concerned with giving people the ability to control and interact with the data relating to them. Mortier and co describe this as “agency”. People must be allowed to opt in or opt out of data collection programs, to correct data if it turns out to be wrong or outdated, and so on. That will require simple-to-use data access mechanisms that have yet to be developed.

The final theme builds on this to allow people to change their data preferences in future, an idea the team call “negotiability”. Something like this is already coming into force in the European Union where the Court of Justice has recently begun to enforce the “right to be forgotten”, which allows people to remove information from search results under certain circumstances.

This is a tricky area but Mortier and co point out that the balance of power in the data ecosystem is weighted towards the collectors and aggregators rather than to private individuals and this needs to be redressed.

The overall impression from this manifesto is that our data-driven society is evolving rapidly, particularly with the growing focus on big data. An important factor in all this is the role of governments and, in particular, the revelations about data collection by government bodies such as the NSA in the US, GCHQ in the UK and even health providers such as the UK’s National Health Service.

“We believe that technology designers must take on the challenge of building ethical systems,” conclude Mortier and co.

That’s something Tesco and other data collectors would do well to bear in mind. But while this is clearly a worthy goal and one that there should be general and widespread support for, the devil will be in the detail. When it comes to building consensus, the words “herding” and “cats” come to mind.

Worth pursuing nevertheless.

Ref: http://arxiv.org/abs/1412.6159  Human-Data Interaction: The Human Face of the Data-Driven Society


View the original article here

Broadcast Every Little Drama

Meerkat and Periscope show how simple, fun, and weird live-streaming can be.

Facebook Lets Developers Build on Its Chat App

Facebook hopes that adding functionality like video sharing and shopping to Messenger will help it grow even as competition rises.

Facebook is responding to the growing popularity of mobile messaging apps by giving its own messaging app new capabilities. The company will let developers make their apps work within Facebook Messenger, and is also making it possible for shoppers to chat with businesses using the app.

During a presentation at the social network’s F8 developer conference in San Francisco on Wednesday, Facebook founder and CEO Mark Zuckerberg said Facebook wants to make it easier for Messenger’s 600 million users to share more content—animated GIFs, videos, or animated greeting cards, for example—through Messenger itself, rather than by leaving the chat app. To do that, Facebook is rolling out Messenger Platform, which developers can use to make Messenger apps for sharing various kinds of media.

Thursday, April 2, 2015

Amazon Robot Contest May Accelerate Warehouse Automation

Robots will use the latest computer-vision and machine-learning algorithms to try to perform the work done by humans in vast fulfillment centers.

Seven Must-Read Stories (Week Ending March 21, 2015)

Sorry, I could not read the content from this page.

View the original article here

Recommended from Around the Web (Week Ending March 28, 2015)

A roundup of the most interesting stories from other sites, collected by the staff at MIT Technology Review.

The Taming of Tech Criticism
Evgeny Morozov points out the essential conservatism at the heart of most technology criticism.
—Brian Bergstein, deputy editor

Google Makes Most of Close Ties to White House
What is Google so afraid of that it spends so much time lobbying Washington?
—Brian Bergstein

A New Series of Water-Activated Illustrations and Games on Seattle Sidewalks Only Appear When It Rains
Citizens of the raintropolis that is Seattle are enjoying these technology-driven works of public art, which can only be seen when they’re wet.
—Kyanna Sutton, senior Web producer

Putting a Virtual Nose on Video Games Could Reduce Simulator Sickness
Researchers find that adding a virtual nose to your view from inside a virtual reality headset reduces motion sickness.
—Tom Simonite, San Francisco bureau chief

Rethinking the Brain
Criticism of billion-euro Human Brain Project was justified, says report.
—Antonio Regalado, senior editor, biomedicine

The War Over Who Steve Jobs Was
Interesting look at the conflict behind different biographical takes on Steve Jobs.
—Rachel Metz, senior editor, mobile

Drones Beaming Web Access Are in the Stars for Facebook
Is Facebook really just still trying to connect people? Skeptics beware.
—J. Juniper Friedman, associate Web producer

Mapping the Sneakernet
The idea of Internet access usually ignores the informal flow of information.
—Will Knight, news and analysis editor


View the original article here

Adding Greater Realism to Virtual Worlds

A startup is borrowing techniques used in high-frequency trading to enable more realistic simulated worlds.


Concept art for the computer game Worlds Adrift, which is being developed using Improbable’s technology.

What new possibilities might open up in video game design—and beyond—when an unlimited number of people can inhabit a truly realistic virtual world simultaneously? This is just one of several questions that Improbable, a company that’s developing a new environment for building virtual worlds of unprecedented scale and complexity, hopes to answer.

The technology could also be used to create real-world simulations that reveal, for instance, the effect that closing a major railway station would have during a disease epidemic, or how a radical change in a government’s housing policy might affect a country’s infrastructure.

Improbable has developed techniques that make it possible to share large amounts of information between multiple servers nearly instantaneously. This will allow many more players to experience a virtual world together than is currently possible. It will also allow more realistic physical interactions to take place within those worlds. Currently, in even the most elaborate virtual worlds, some characters and objects cannot interact due to the computational power this would require.

Virtual worlds will, according to Improbable’s CEO and cofounder, Herman Narula, no longer feel like they’re built of “cardboard.” Moreover, using Improbable’s technology, objects and entities will be able to remain in the virtual world persistently, even when there are no human players around (currently, most virtual worlds essentially freeze when unoccupied). And actions taken in one corner of a game could have implications later or in another place.

Virtual worlds are already often expansive. The procedurally generated new game No Man’s Sky, for example, presents a virtual galaxy that is too large for any human to fully explore within his or her lifetime (see “No Man’s Sky: A Vast Game Created by Algorithms”). But even if we are awed by the sprawl of their geography, the complexity of such worlds is lacking due to hardware and software limitations.

Tuesday, March 31, 2015

PC Gaming Week: Why Toki made platform fans go ape

Remembering Ocean's surreal platformer through Amiga Power's 1991 review.

View the original article here

Facebook means business with Messenger Platform, VR and drones

Whilst VR inevitably dominated the company's F8 developer conference, Messenger Platform was a big move on the business front.

View the original article here