Progress towards making the net more multilingual is welcome, says Bill Thompson.
It is 40 years to the week since the first data packets were sent over the Arpanet.
That was the research network commissioned by the US Department of Defense's Advanced Research Projects Agency (Arpa) to see whether computer-to-computer communications could be made faster, more reliable and more robust by using the novel technique of packet switching instead of the conventional circuit-switched networks of the day.
Instead of connecting computers in the way a telephone exchange does, using switches to set up an electrical circuit over which data can be sent, packet switching breaks a message into chunks and sends each chunk - or packet - separately, reassembling them at the receiving end.
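For readers who like to see the principle in code, here is a minimal Python sketch of the idea. The message, packet size and numbering scheme are invented for illustration; a real network adds headers, addressing, routing and retransmission on top.

    # Split a message into numbered packets, deliver them in any order,
    # and reassemble them at the receiving end.
    import random

    def to_packets(message, size=8):
        # Each packet carries a sequence number and a chunk of the message.
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    def reassemble(packets):
        # Sorting by sequence number restores the original order.
        return "".join(chunk for _, chunk in sorted(packets))

    text = "packets can take different routes to the same place"
    packets = to_packets(text)
    random.shuffle(packets)  # simulate packets arriving out of order
    assert reassemble(packets) == text

The sequence numbers are what allow the receiver to put the chunks back in order, however they happen to arrive.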
Late on 29 October 1969, Charley Kline sat down at a terminal in the computer laboratory at UCLA, where he was a student, and established a link to a system at the Stanford Research Institute, sending the first data packets over the nascent Arpanet.
By the end of the year permanent links had been established between four sites in the US, and over the following years the Arpanet grew into a worldwide research network.
Arpanet was one of the computer networks that coalesced into today's internet, and the influence of the standards and protocols established there can still be seen today, making this anniversary as important for historians of the network society as July's celebration of the 1969 Apollo 11 landing is for those who study space science.
Technology does not stand still, and over the years the way computers communicate with each other has changed enormously. Early Arpanet computers used the Network Control Protocol to talk to each other, but in 1983 this was replaced by the more powerful and flexible TCP/IP - the Transmission Control Protocol and Internet Protocol.
Today we are in the process of migrating our networks from IP version 4 to IP version 6, which offers a vastly larger address space so that many more devices can be connected, along with features intended to make the network more secure and robust. And work continues to improve and refine all aspects of the network architecture.
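To give a sense of the difference in scale, here is a small sketch using Python's standard ipaddress module; the two addresses are arbitrary examples drawn from the documentation ranges, not real allocations.

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")     # from the IPv4 documentation range
    v6 = ipaddress.ip_address("2001:db8::1")   # from the IPv6 documentation range
    print(v4.version, v6.version)              # 4 6
    print(2 ** 32)     # about 4.3 billion possible IPv4 addresses
    print(2 ** 128)    # about 3.4 x 10^38 possible IPv6 addresses

IPv4 addresses are 32 bits long and IPv6 addresses are 128 bits, which is where the extra room for new devices comes from.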
One area that is changing is the domain name system, DNS. This links the unique number that identifies every device on the internet with one or more names, making it possible to type in "www.bbc.co.uk" and go to the right web server without having to remember its number.
Designed by the engineer Paul Mockapetris in 1983, DNS is a vital component not just of the web but of the wider network, underpinning services such as e-mail and instant messaging. Every time a program uses a name for a computer instead of a number, DNS is involved.
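In practice that lookup is a single call away in most programming languages. Here is a rough sketch in Python using the standard library's resolver; the addresses it prints will depend on where and when you run it.

    import socket

    # Ask the system resolver - and so, ultimately, DNS - for the
    # numeric addresses behind the name.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.bbc.co.uk", 80):
        print(sockaddr[0])  # an IPv4 or IPv6 address for the web server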
However DNS, like so much of the network's architecture, was developed by English-speaking westerners, and its original design allowed only a limited set of standard ASCII characters to be used in names.
ASCII, the American Standard Code for Information Interchange, is a way of representing letters, numbers and punctuation in the binary code used by computers, and was originally based on old telegraphic codes.
It works well for English, but it had to be extended and adapted to cope with other alphabets, and it has now been largely superseded by the much more powerful Unicode standard, which can represent non-Latin scripts as well as those based on the Latin alphabet.
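A couple of lines of Python make the contrast concrete; the Chinese characters below are simply an example of text that ASCII has no way to hold.

    print("bbc".encode("ascii"))    # b'bbc' - plain English fits in ASCII
    print("北京".encode("utf-8"))    # Unicode text, stored as UTF-8 bytes
    try:
        "北京".encode("ascii")
    except UnicodeEncodeError as err:
        print(err)                  # ASCII cannot represent these characters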
Being able to write in your own language is one thing, but it is also important to be able to have e-mail or website addresses that use it. Unfortunately the design of DNS meant that key aspects would not work with anything other than ASCII, making it impossible simply to add Chinese or Arabic characters to domain names.
Work has been going on since the mid-1990s to change this and provide what are called "internationalised domain names", and many organisations are now able to have websites and e-mail addresses that include Chinese, Cyrillic, Hebrew, Arabic and many other scripts.
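The trick that makes this work without re-engineering DNS itself is to translate the Unicode name into an ASCII-compatible "punycode" form that existing software can carry unchanged. Here is a small sketch using Python's built-in IDNA codec; the domain is made up for illustration.

    name = "münchen.example"              # a made-up internationalised name
    ascii_form = name.encode("idna")      # convert to the ASCII-compatible form
    print(ascii_form)                     # b'xn--mnchen-3ya.example'
    print(ascii_form.decode("idna"))      # and back again: münchen.example

Browsers and mail software do this conversion behind the scenes, so the DNS servers in between only ever see ordinary ASCII labels.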
The process took a significant step forward this week when Icann, the international body that looks after domain names, fast-tracked a proposal to provide internationalised versions of two-letter country domains, such as .uk and .jp.
This will finally allow users of these domains to have a domain name written entirely in the characters of their own language, and it marks an important point in the internationalisation of the whole internet.
It has taken a long time to make this happen, but the problems of re-engineering such a key part of the network infrastructure without breaking anything are enormous, and anyone who reads through the technical documentation will see just how complex the process has been.
And it was definitely necessary to do it properly - the fuss over the recent retuning of Freeview boxes in the UK was bad enough, but trying to persuade a billion internet users to update their software to support a new form of DNS would have been impossible.
Over the next five years the majority of new internet users will come from the non-English-speaking world. It's good to see that those of us who have helped build the network so far are making it more welcoming for them.
Bill Thompson is an independent journalist and regular commentator on the BBC World Service programme Digital Planet.