
Saturday, September 03, 2011

The Guardian On Assange

We are learning in numerous ways how hard it is, in a digital age, to keep control of information. Voice messages, emails, corporate documents, medical records, DNA, government secrets – all are vulnerable to hacking, snooping and simple spillage. From the moment a hacker (or, possibly, a whistleblower) passed a vast store of US government and military records to WikiLeaks it was always on the cards that this data would eventually spill out indiscriminately into the open. This week most of it has – accelerated by WikiLeaks itself, which chose to publish the state department cables in unredacted form.

This paper, and the four other news organisations involved in publishing heavily edited selections from the war logs and cables last year, are united in condemning this act. From the start of our collaboration, it was clear to the newspapers – and apparently accepted, if reluctantly, by WikiLeaks's founder, Julian Assange – that it was necessary to redact the material in order to minimise the potential risk to vulnerable people who might be placed in harm's way by publication. That joint exercise, which ended last December, has never been shown to have placed an individual's life at risk.

But, with the well-documented rifts in the original WikiLeaks team last year, the data was not secured. One copy was obtained by Heather Brooke, the freedom of information campaigner. It now appears that last December another WikiLeaks employee was responsible for a further leak when he placed the unredacted cables on a peer-to-peer site with an old password – motivated, it seems, by the arrest of Assange on allegations concerning his private life. It is not clear that even Assange – distracted by his legal actions over the Swedish sex allegations – knew of this act. This, to be clear, was not the original file accessed by the Guardian last year, which was, as agreed with WikiLeaks, removed from a secure file server after we had obtained a copy and never compromised.

A handful of people knew of the existence of this republished file and, realising its potential for harm, they did not publish any clues as to how it might be accessed. WikiLeaks, by contrast, tried to blame others for the leak, hinted at how it could be accessed, and then finally decided to publish it all to the world in an unredacted form.

Some WikiLeaks devotees and extreme freedom of information advocates will applaud this act. We don't. We join the New York Times, Der Spiegel, Le Monde and El País in condemning it. Many of our newspapers' reporters and editors worked hard to publish material based on the cables in a responsible, comprehensible and contextualised form. We continue to believe in the validity and benefits of this collaboration in transparency. But we don't count ourselves in that tiny fringe of people who would regard themselves as information absolutists – people who believe it is right in all circumstances to make all information free to all. The public interest in all acts of disclosure has to be weighed against the potential harm that can result.

It has never been entirely clear whether Assange has a consistent position on this issue. At times he has scorned those who urged redaction; at others he has portrayed himself as an advocate of responsible redaction. He shows little or no understanding of the legal constraints facing less free souls than himself, often voicing contempt for publishers bound by the laws of particular jurisdictions. At its best WikiLeaks seemed to offer the hope of frustrating the most repressive and restrictive regimes. But the organisation has dwindled into the vehicle of one flawed individual – occasionally brilliant, but increasingly volatile and erratic. There was no compelling need, even after the recent disclosure of the internal leak, for WikiLeaks to publish all the material in the form in which it did. Julian Assange took a clear decision this week: he must take responsibility for it.

Editorial of The Guardian




Thursday, March 10, 2011

Companies and Information

IN EARLY February Hewlett-Packard showed off its new tablet computer, which it hopes will be a rival to Apple’s iPad. The event was less exciting than it might have been, thanks to the leaking of the design in mid-January. Other technology companies have suffered similar embarrassments lately. Dell’s timetable for bringing tablets to market appeared on a tech-news website. A schedule for new products from NVIDIA, which makes graphics chips, also seeped out.

Geeks aren’t the only ones who can’t keep a secret. In January it emerged that Renault had suspended three senior executives, allegedly for passing on blueprints for electric cars (which the executives deny). An American radio show has claimed to have found the recipe for Coca-Cola’s secret ingredient in an old newspaper photograph. Facebook’s corporate privacy settings went awry when some of the social network’s finances were published. A strategy document from AOL came to light, revealing that the internet and media firm’s journalists were expected to write five to ten articles a day.

Meanwhile, Julian Assange has been doing his best to make bankers sweat. In November the founder of WikiLeaks promised a “megaleak” early in 2011. He was said to be in possession of a hard drive from the laptop of a former executive of an unnamed American bank, containing documents even more toxic than the copiously leaked diplomatic cables from the State Department. They would reveal an “ecosystem of corruption” and “take down a bank or two”.

“I think it’s great,” Mr Assange said in a television interview in January. “We have all these banks squirming, thinking maybe it’s them.” At Bank of America (BofA), widely thought to be the bank in question, an internal investigation began. Had any laptop gone missing? What could be on its hard drive? And how should BofA react if, say, compromising e-mails were leaked?

The bank’s bosses and investigators can relax a bit. Recent reports say that Mr Assange has acknowledged in private that the material may be less revealing than he had suggested. Financial experts would be needed to determine whether any of it was at all newsworthy.

Even so, the WikiLeaks threat and the persistent leaking of other supposedly confidential corporate information have brought an important issue to the fore. Companies are creating an ever-growing pile of digital information, from product designs to employees’ e-mails. Keeping tabs on it all is increasingly hard, not only because there is so much of it but also because of the ease of storing and sending it. Much of this information would do little damage if it seeped into the outside world; some of it, indeed, might well do some good. But some could also be valuable to competitors—or simply embarrassing—and needs to be protected. Companies therefore have to decide what they should try to keep to themselves and how best to secure it.

Trying to prevent leaks by employees or to fight off hackers only helps so much. Powerful forces are pushing companies to become more transparent. Technology is turning the firm, long a safe box for information, into something more like a sieve, unable to contain all its data. Furthermore, transparency can bring huge benefits. “The end result will be more openness,” predicts Bruce Schneier, a data-security guru.

From safe to sieve

When corporate information lived only on paper, which was complemented by microfilm about 50 years ago, it was much easier to manage and protect than it is today. Accountants and archivists classified it; the most secret documents were put in a safe. Copying was difficult: it would have taken Bradley Manning, the soldier who is alleged to have sent the diplomatic cables to WikiLeaks, years to photograph or smuggle out all the 250,000 documents he is said to have downloaded—assuming that he was not detected.

Things did not change much when computers first made an appearance in firms. They were used mostly for accounting or other transactions, known as “structured information”. And they were self-contained systems to which few people had access. Even the introduction in the 1980s of more decentralised information-technology (IT) systems and personal computers (PCs) did not make much of a difference. PCs served at first as glorified typewriters.

It was only with the advent of the internet and its corporate counterpart, the intranet, that information began to flow more quickly. Employees had access to lots more data and could exchange electronic messages with the outer world. PCs became a receptacle for huge amounts of “unstructured information”, such as text files and presentations. The banker’s hard drive in Mr Assange’s possession is rumoured to contain several years’ worth of e-mails and attachments.

Now an even more important change is taking place. So far firms have spent their IT budgets mostly on what Geoffrey Moore of TCG Advisors, a firm of consultants, calls “systems of record”, which track the flow of money, products and people within a company and, more recently, its network of suppliers. Now, he says, firms are increasingly investing in “systems of engagement”. By this he means all kinds of technologies that digitise, speed up and automate a firm’s interaction with the outer world.

Mobile devices, video conferencing and online chat are the most obvious examples of these technologies: they allow instant communication. But they are only part of the picture, says Mr Moore. Equally important are a growing number of tools that enable new forms of collaboration: employees collectively edit online documents, called wikis; web-conferencing services help firms and their customers to design products together; and smartphone applications let companies collect information about people’s likes and dislikes and hence about market trends.

It is easy to see how such services will produce ever more data. They are one reason why IDC, a market-research firm, predicts that the “digital universe”, the amount of digital information created and replicated in a year, will increase to 35 zettabytes by 2020, from less than 1 zettabyte in 2009 (see chart); 1 zettabyte is 1 trillion gigabytes, or the equivalent of 250 billion DVDs. But these tools will also make a firm’s borders ever more porous. “WikiLeaks is just a reflection of the problem that more and more data are produced and can leak out,” says John Mancini, president of AIIM, an organisation dedicated to improving information management.
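
As a back-of-the-envelope check on those figures (a sketch only; the assumption that a DVD holds roughly 4 GB is mine, since the article does not say which disc format IDC used):

```python
# Rough arithmetic behind the "digital universe" comparison.
# Assumption: one DVD holds about 4 GB (single-layer discs hold 4.7 GB,
# so the round figure of 250 billion depends on the capacity assumed).

ZETTABYTE_IN_GB = 10**12           # 1 zettabyte = 1 trillion gigabytes
DVD_CAPACITY_GB = 4                # assumed capacity per DVD

dvds_per_zettabyte = ZETTABYTE_IN_GB / DVD_CAPACITY_GB
print(f"1 ZB is roughly {dvds_per_zettabyte / 1e9:.0f} billion DVDs")  # ~250

growth = 35 / 1                    # 35 ZB forecast for 2020 vs under 1 ZB in 2009
print(f"Forecast growth, 2009 to 2020: at least {growth:.0f}-fold")
```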

Two other developments are also poking holes in companies’ digital firewalls. One is outsourcing: contractors often need to be connected to their clients’ computer systems. The other is employees’ own gadgets. Younger staff, especially, who are attuned to easy-to-use consumer technology, want to bring their own gear to work. “They don’t like to use a boring corporate BlackBerry,” explains Mr Mancini.

The data drain

As a result, more and more data are seeping out of companies, even of the sort that should be well protected. When Eric Johnson of the Tuck School of Business at Dartmouth College and his fellow researchers went through popular file-sharing services last year, they found files that contained health-related information as well as names, addresses and dates of birth. In many cases, explains Mr Johnson, the reason for such leaks is not malice or even recklessness, but that corporate applications are often difficult to use, in particular in health care. To be able to work better with data, employees often transfer them into spreadsheets and other types of files that are easier to manipulate—but also easier to lose control of.

Although most leaks are not deliberate, many are. Renault, for example, claims to be a victim of industrial espionage. In a prominent insider-trading case in the United States, some hedge-fund managers are accused of having benefited from data leaked from Taiwanese semiconductor foundries, including spreadsheets showing the orders and thus the sales expectations of their customers.

Not surprisingly, therefore, companies feel a growing urge to prevent leaks. The pressure is regulatory as well as commercial. Stricter data-protection and other rules are also pushing firms to keep a closer watch on information. In America, for instance, the Health Insurance Portability and Accountability Act (HIPAA) introduced security standards for personal health data. In lawsuits companies must be able to produce all relevant digital information in court. No wonder that some executives have taken to using e-mail sparingly or not at all. Whole companies, however, cannot dodge the digital flow.

To help them plug the holes, companies are being offered special types of software. One is called “content management”. Programs sold by Alfresco, EMC Documentum and others let firms keep tabs on their digital content, classify it and define who has access to it. A junior salesman, for instance, will not be able to see the latest financial results before publication—and thus cannot send them to a friend.
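
The access-control idea behind such content-management tools can be illustrated with a minimal sketch (the roles, labels and `can_view` helper below are invented for the example and do not describe any particular product):

```python
# Minimal role-based access control: documents carry a sensitivity label,
# roles carry a clearance set, and every view is checked against the two.

ROLE_CLEARANCE = {
    "junior_sales": {"public", "internal"},
    "finance_director": {"public", "internal", "unpublished_financials"},
}

def can_view(role: str, document_label: str) -> bool:
    """Return True if the role's clearance covers the document's label."""
    return document_label in ROLE_CLEARANCE.get(role, set())

# The junior salesman cannot open the unpublished results,
# and so cannot forward them to a friend.
print(can_view("junior_sales", "unpublished_financials"))      # False
print(can_view("finance_director", "unpublished_financials"))  # True
```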

Another type, in which Symantec and Websense are the market leaders, is “data loss prevention” (DLP). This is software that sits at the edge of a firm’s network and inspects the outgoing data traffic. If it detects sensitive information, it sounds the alarm and can block the incriminating bits. The software is often used to prevent social-security and credit-card numbers from leaving a company—and thus make it comply with HIPAA and similar regulations.
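
A toy version of what such DLP filters look for at the network edge might read as follows (a sketch under simplifying assumptions; the patterns are crude and the `flag_outgoing` function is invented, whereas real products combine many more detectors with document fingerprinting):

```python
import re

# Simplified data-loss-prevention check: scan outgoing text for strings that
# look like US social-security or payment-card numbers and flag the message.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def flag_outgoing(message: str) -> bool:
    """Return True if the message should be blocked for review."""
    return bool(SSN_PATTERN.search(message) or CARD_PATTERN.search(message))

print(flag_outgoing("Quarterly slides attached."))              # False
print(flag_outgoing("Customer SSN is 123-45-6789, thanks."))    # True
```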

A third field, newer than the first two, is “network forensics”. The idea is to keep an eye on everything that is happening in a corporate network, and thus to detect a leaker. NetWitness, a start-up company, says that its software records all the digital goings-on and then looks for suspicious patterns, creating “real-time situation awareness”, in the words of Edward Schwartz, its chief security officer.

There are also any number of more exotic approaches. Autonomy, a British software firm, offers “bells in the dark”. False records—made-up pieces of e-mail, say—are spread around the network. Because they are false, no one should gain access to them. If somebody does, an alarm is triggered, just as a burglar might set one off when breaking into a house at night.
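
The honeytoken idea can be captured in a few lines (a hypothetical sketch; the record names and `alert_security_team` function are made up, and Autonomy's product operates on a firm's content systems rather than a toy dictionary):

```python
# "Bells in the dark": decoy records that no legitimate task ever needs.
# Any attempt to read one is treated as a probable intrusion.

DECOY_IDS = {"acct-99999", "memo-ceo-draft-777"}   # fabricated records

def alert_security_team(user: str, record_id: str) -> None:
    print(f"ALERT: {user} touched decoy record {record_id}")

def fetch_record(record_id: str, user: str, store: dict) -> str:
    if record_id in DECOY_IDS:
        alert_security_team(user, record_id)        # the bell rings
    return store.get(record_id, "")

fetch_record("memo-ceo-draft-777", "jdoe", {})      # triggers the alert
```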

These programs deter some leakers and keep employees from doing stupid things. But reality rarely matches the marketing. Content-management programs are hard to use and rarely fully implemented. Role-based access control sounds fine in theory but is difficult in practice. Firms often do not know exactly what access should be assigned to whom. Even if they do, jobs tend to change quickly. A field study of an investment bank by Mr Johnson and his colleagues found that one department of 3,000 employees saw 1,000 organisational changes within only a few months.

This leads to what Mr Johnson calls “over-entitlement”. So that workers can get their jobs done, they are given access to more information than they really need. At the investment bank, more than 50% were over-entitled. Because access is rarely revoked, over time employees gain the right to see more and more. In some companies, Mr Johnson was able to predict a worker’s length of employment from how much access he had. But he adds that if role-based access control is enforced too strictly, employees have too little data to do their jobs.

Similarly, DLP is no guarantee against leaks: because it cannot tell what is in encrypted files, data can be wrapped up and smuggled out. Network forensics can certainly show what is happening in a small group of people working on a top-secret product. But it is hard to see how it can keep track of the ever-growing traffic that passes through or leaves big corporate IT systems, for instance through a simple memory stick (which plugs into a PC and can hold the equivalent of dozens of feature-length films). “Technology can’t solve the problem, just lower the probability of accidents,” explains John Stewart, the chief security officer of Cisco, a maker of networking equipment.

Other experts point out that companies face a fundamental difficulty. There is a tension in handling large amounts of data that can be seen by many people, argues Ross Anderson, of Cambridge University. If a system lets a few people do only very simple things—such as checking whether a product is available—the risks can be managed; but if it lets a lot of people do general inquiries it becomes insecure. SIPRNet, where the American diplomatic cables given to WikiLeaks had been stored, is a case in point: it provided generous access to several hundred thousand people.

In the corporate world, to limit the channels through which data can escape, some companies do not allow employees to bring their own gear to work or to use memory sticks or certain online services. A survey by Robert Half Technology, a recruitment agency, found in 2009 that more than half of chief information officers in America blocked the use of sites such as Facebook at work, although firms have probably become more permissive since.

Yet this approach comes at a price, and not only because it makes a firm less attractive to Facebook-using, iPhone-toting youngsters. “More openness also creates trust,” argues Jeff Jarvis, a new-media sage who is writing a book about the virtues of transparency, entitled “Public Parts”. Dell, he says, gained a lot of goodwill when it started talking openly about its products’ technical problems, such as exploding laptop batteries. “If you open the kimono, a lot of good things happen,” says Don Tapscott, a management consultant and author: it keeps the company honest, creates more loyalty among employees and lowers transaction costs with suppliers.

More important still, if the McKinsey Global Institute, the research arm of a consulting firm, has its numbers right, limiting the adoption of systems of engagement can hurt profits. In a recent survey it found that firms that made extensive use of social networks, wikis and so forth reaped important benefits, including faster decision-making and increased innovation.

How then to strike the right balance between secrecy and transparency? It may be useful to think of a computer network as being like a system of roads. Just like accidents, leaks are bound to happen and attempts to stop the traffic will fail, says Mr Schneier, the security expert. The best way to start reducing accidents may not be employing more technology but making sure that staff understand the rules of the road—and its dangers. Transferring files onto a home PC, for instance, can be a recipe for disaster. It may explain how health data have found their way onto file-sharing networks. If a member of the employee’s family has joined such a network, the data can be replicated on many other computers.

Don’t do that again

Companies also have to set the right incentives. To avoid the problems of role-based access control, Mr Johnson proposes a system akin to a speed trap: it allows users to gain access to more data easily, but records what they do and hands out penalties if they abuse the privilege. He reports that Intel, the world’s largest chipmaker, issues “speeding tickets” to employees who break its rules.
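
Read as code, the "speeding ticket" scheme amounts to logging rather than blocking (a sketch under assumed thresholds; Intel's actual rules are not public and the names and quota here are invented):

```python
from collections import defaultdict

# "Speed trap" access model: grant access freely, log every read, and issue
# a ticket when a user's volume exceeds what the job plausibly requires.

DAILY_LIMIT = 50                     # assumed per-user document quota
access_log = defaultdict(int)        # user -> documents opened today

def issue_speeding_ticket(user: str) -> None:
    print(f"Ticket for {user}: {access_log[user]} documents opened today")

def open_document(user: str, doc_id: str) -> str:
    access_log[user] += 1
    if access_log[user] > DAILY_LIMIT:
        issue_speeding_ticket(user)
    return f"contents of {doc_id}"   # access is still granted, only recorded
```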

Mr Johnson is the first to admit that this approach is too risky for data that are very valuable or the release of which could cause a lot of damage. But most companies do not even realise what kind of information they have and how valuable or sensitive it is. “They are often trying to protect everything instead of concentrating on the important stuff,” reports John Newton, the chief technology officer of Alfresco.

The “WikiLeaks incident is an opportunity to improve information governance,” wrote Debra Logan, an analyst at Gartner, a research firm, and her colleagues in a recent note. A first step is to decide which data should be kept and for how long; many firms store too much, making leaks more likely. In a second round, says Ms Logan, companies must classify information according to how sensitive it is. “Only then can you have an intelligent discussion about what to protect and what to do when something gets leaked.”

Such an exercise could also be an occasion to develop what Mr Tapscott calls a “transparency strategy”: how closed or open an organisation wants to be. The answer depends on the business it is in. For companies such as Accenture, an IT consultancy and outsourcing firm, security is a priority from the top down because it is dealing with a lot of customer data, says Alastair MacWillson, who runs its security business. Employees must undergo security training regularly. As far as possible, software should control what leaves the company’s network. “If you try to do something with your BlackBerry or your laptop that you should not do,” explains Mr MacWillson, “the system will ask you: ‘Should you really be doing this?’”

At the other end of the scale is the Mozilla Foundation, which leads the development of Firefox, an open-source browser. Transparency is not just a natural inclination but a necessity, says Mitchell Baker, who chairs the foundation. If Mozilla kept its cards close to the chest, its global community of developers would not and could not help write the program. So it keeps secrets to a minimum: employees’ personal information, data that business partners do not want made public and security issues in its software. Everything else can be found somewhere on Mozilla’s many websites. And anyone can take part in its weekly conference calls.

Few companies will go that far. But many will move in this direction. The transparency strategy of Best Buy, an electronics retailer, is that its customers should know as much as its employees. Twitter tells its employees that they can tweet about anything, but that they should not do “stupid things”. In the digital era of exploding quantities of data that are increasingly hard to contain within companies’ systems, more companies are likely to become more transparent. Mr Tapscott and Richard Hunter, another technology savant, may not have been exaggerating much a decade ago, when they wrote books foreseeing “The Naked Corporation” and a “World Without Secrets”.



Wednesday, December 10, 2008

Br(a)in Scan

Enlightenment man

A FORMER vice-president, Al Gore, and one of the co-founders of Google, Larry Page, were already seated on the stage of Google’s “Zeitgeist” conference, an exclusive gathering for the intelligentsia, but the third chair was still empty. After a few minutes, Sergey Brin, the other founder of the world’s biggest internet company, joined them. Messrs Gore and Page gave him the floor, because Mr Brin had something important to say.

The global “thought leaders” in the audience at Zeitgeist had just spent two days talking about solving the world’s biggest problems by applying the Enlightenment values of reason and science that Google espouses. But Mr Brin, usually a very private man, opened with an uncharacteristically personal story. He talked about his mother, Eugenia, a Jewish-Russian immigrant and a former computer engineer at NASA, and her suffering from Parkinson’s disease.

The reason was that Mr Brin had recently discovered that he had inherited from his mother a mutation of a gene called LRRK2 that appears to predispose carriers to familial Parkinson’s. Thus Mr Brin, at the age of 35, had found out that he had a high statistical chance—between 20% and 80%, depending on the study—of developing Parkinson’s himself. To the surprise of many in the audience, this did not seem to bother him.

One member of the audience asked whether ignorance was not bliss in such matters, since knowledge would only lead to a life spent worrying. Mr Brin looked genuinely puzzled. First of all, he began, who’s talking about worrying? His discovery was merely a statistical insight, and Mr Brin, a wizard at mathematics, uses statistics without fretting about them. More importantly, he went on, his knowledge means that he can now take measures to ward off the disease. Exercise helps, as does smoking, apparently—although Mr Brin, to laughter, denied taking up cigarettes (a vice of his father’s).

But Mr Brin was making a much bigger point. Isn’t knowledge always good, and certainly always better than ignorance? Armed with it, Mr Brin is now in a position to fund and encourage research into this gene in particular, and Parkinson’s in general. He is likely to contact other bearers of the gene. In effect, Mr Brin regards his mutation of LRRK2 as a bug in his personal code, and thus as no different from the bugs in computer code that Google’s engineers fix every day. By helping himself, he can therefore help others as well. He considers himself lucky.

The moment in some ways sums up Mr Brin’s approach to life. Like Mr Page, he has a vision, as Google’s motto puts it, of making all the world’s information “universally accessible and useful”. Very soon after the two cooked up their new engine for web searches, in the late 1990s at Stanford University, they began thinking about information that is today beyond the web. Their vast project to digitise books has been the most controversial so far, prompting a lawsuit from a group of publishers in 2005 that was resolved in October. But Messrs Brin and Page have always taken a special interest in the sort of information that most people hold dearest: that about their health.


Mr Brin’s faith in the transformative power of knowledge also has personal roots. He was born in the Soviet Union, an opaque society and one often hostile to his Jewish parents. His father, Michael, wanted to be an astronomer, but Russia’s Communists barred Jews from the physics and astronomy departments at universities. So Michael Brin became a mathematician, as his father had been. This too was difficult for Jews, who had to take special, more difficult entrance exams. Both Michael and Eugenia passed nonetheless.

But it was clear that they had to get out to lead fulfilling lives. They applied for an exit visa in 1978. Michael Brin was fired for it, and his wife resigned. Fortunately, they received their visas, and in 1979 emigrated to America. Sergey was six at the time. He went to a Montessori school and learnt English, though he retains a hint of a Russian accent to this day. Language did not come naturally to him. Maths did. So Sergey followed his father and grandfather into mathematics, adding computer science at the University of Maryland, and then went to Stanford to get his PhD. Silicon Valley, with its casual dress, sunshine, optimism and curiosity, was an instant fit.

At an orientation for new students he met Larry Page, the son of computer scientists and also of Jewish background. They instantly annoyed each other. “We’re both kind of obnoxious,” Mr Brin once said—as ever, half in jest, half serious. They decided to disagree on every subject that came up in conversation, and in the process discovered that being together felt just like home for both of them. They became intellectual soul-mates and close friends.

Mr Brin was interested in data mining, and Mr Page in extending the concept of inferring the importance of a research paper from its citations in other papers. Cramming their dormitory room full of cheap computers, they applied this method to web pages and found that they had hit upon a superior way to build a search engine. Their project grew quickly enough to cause problems for Stanford’s computing infrastructure. With a legendary nudge—from Andy Bechtolsheim, a prominent Silicon Valley entrepreneur and investor who wrote a $100,000 cheque to something called Google Inc—Messrs Page and Brin established a firm by that name.
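
The citation idea that Mr Page carried over to the web is, in essence, PageRank. A minimal power-iteration sketch on a toy link graph conveys the principle (the graph, damping factor and iteration count are illustrative, not Google's production algorithm):

```python
# Toy PageRank: a page matters if pages that matter link to it.

links = {                      # page -> pages it links to (made-up graph)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):            # power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += damping * share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))   # "C" ranks highest here
```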

Suspending their PhD programmes with Stanford’s blessing, the two became entrepreneurs of a typical Silicon Valley start-up. They even worked out of a garage for a time, as Valley lore seems to require. One advantage of this particular garage was that its owner, an early Google employee, had a sister, Anne Wojcicki, who got along well with Mr Brin and has since become his wife. Ms Wojcicki, moreover, had an interest in health information, and began talking to Mr Brin about ways to improve access to it.

Google began its astonishing rise. On the advice of their investors, the founders hired Eric Schmidt, a technology veteran, as chief executive, to provide “adult supervision”. Mr Schmidt’s role was to reassure Wall Street types that Google was responsibly run, in preparation for a stockmarket listing. As Google filed its papers, the world discovered that Google had added to its breakthrough in search technology a fantastically lucrative revenue model: text ads, related to the keywords of web searchers, that charge an advertiser only when a consumer actually clicks and thereby expresses an interest.




As Google’s share price went up, Messrs Brin and Page became multi-billionaires. Wealth has its effects, and stories began leaking out. There was, for instance, the Boeing 767 that Messrs Page, Brin and Schmidt began sharing and that Mr Brin was eager to turn into a “party plane” with beds sufficiently large for comfortable “mile-high club” membership.

Yet their indulgences tend to share three less decadent features. First, they enjoy being just plain goofy. During their rare meetings with the press, Mr Schmidt will typically talk the most but say least, rattling off official company positions until the journalists succumb to exhaustion. Messrs Page and Brin, meanwhile, will sit next to him and exchange the odd knowing look, then add the occasional short, inappropriate and mildly embarrassing—but often hyper-perceptive—aside that livens things up and forces Mr Schmidt to backpedal for a few minutes.

Second, they are drawn to pranks and diversions that are educational—and ideally outrageous. They used to be regulars at Burning Man, a festival in the Nevada desert where oddballs display innovative art and mechanical creations. And Mr Brin has invested $5m—in effect, the price of his ticket—in a company based in Virginia that arranges trips for private individuals to the International Space Station on Russian Soyuz spacecraft.


Third, Messrs Brin and Page appear to be trying to do good. They have been mocked endlessly, and understandably, for their corporate motto (“Don’t be evil”), but probably mean it. When they start to look evil, it is usually out of naivety. Google went into China agreeing to censor its search results to appease the Communists, but did so in the belief that a lot more information, with omissions clearly labelled, makes the Chinese better off. Mr Brin certainly had the Russia of his youth in mind, but agonised over the decision.

Despite the best intentions of Google’s founders, privacy advocates worry that it knows a dangerous amount about its users, which might be released inadvertently if something goes wrong. And there is growing concern about Google’s dominance of the internet-advertising market.

But Messrs Page and Brin have other things on their minds. “Solving big problems is easier than solving little problems,” Mr Page likes to say, and both preach a “healthy disregard for the impossible”. They hope, for instance, to help solve the world’s energy and climate problems via google.org, Google’s philanthropic arm. Health is another big problem. Mr Page, Mr Brin and his wife, Ms Wojcicki, have brainstormed with people such as Craig Venter, a biologist who helped map the human genome. Mr Brin instinctively regards genetics as a database and computing problem. So does his wife: she co-founded, with Linda Avey, a firm called 23andMe that lets people analyse and compare their genomes (made up of 23 pairs of chromosomes).

The relationship between Google and tiny 23andMe has on occasion raised eyebrows. Google is an investor, although Mr Brin has recused himself from decisions about it. But Mr Brin and Ms Wojcicki are quite the marketing pair. When the global political and economic elite gathered at Davos, a big draw was 23andMe’s “spit party” where the rich and famous salivated into tubes to provide DNA samples. A cynical view of Mr Brin’s Zeitgeist announcement is that it was just a marketing stunt. Ms Wojcicki and Ms Avey were in the room as he spoke.

More likely, Mr Brin and his wife have genuine faith in the value of genetic knowledge for its own sake. They get their kicks by comparing whether they share the gene that makes urine stink after eating asparagus, or the one that determines whether earwax is mealy or oily. But they do ultimately regard it as code. And code, as Messrs Brin and Page often say, benefits from many eyeballs, which is why Google typically uses and releases open-source software, such as its web browser and mobile-phone operating system. (It does, however, keep its search and advertising algorithms private.)

Mr Brin was therefore setting a public example with his announcement at Zeitgeist. Let everybody discover their genomes, through 23andMe or another firm, and then feel comfortable sharing the code so that others—patients, doctors, researchers—can get to work crunching the data and looking for the bugs. Throughout history, the prospect of greater access to knowledge has frightened some people. But those are not the people that Sergey Brin mixes with in Silicon Valley.