Learn the history and key concepts of how the internet came to be

The Fundamentals of the Internet

Network Communications

Before the late 1950s, computers worked on one task at a time. Two concepts that were key to the internet were developed around that time: remote communications, the ability to dial up a computer and control it from another location, and time-sharing, where a single computer could divide its processing power among multiple tasks from many users at the same time.

The Cold War

During the Cold War, fears of an imminent nuclear attack from the Soviet Union led the US military to consider building a national network that would keep working in the case of a nuclear attack. So the US founded ARPA (the Advanced Research Projects Agency, later renamed DARPA) in February 1958. In 1966, the agency began work on a large computer network called the ARPANET, which was the precursor to the internet. DARPA used or developed key technologies that established the backbone of how the internet operates.

Packet switching

In 1962, Paul Baran of the RAND Corporation was commissioned by the US Air Force to develop a way to maintain a working network in case of a nuclear attack. He developed the concept of a packet-switched network.

Before this concept, a network consisted of machines connected in sequence, one after the other. If one of those machines was destroyed, communications to the rest of the network were interrupted.

With packet switching, a message is broken down into smaller pieces. Each piece carries information about the sender and the receiver, as well as data about how the message should be put back together. If a machine in the network is damaged, the missing packets can be resent through another machine.
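
As a rough illustration, here is a minimal sketch of the idea in Python: a message is split into numbered packets that can arrive in any order and still be reassembled. Real packets carry binary headers and checksums; the tuple format below is a simplification invented for this example.

    def to_packets(message, size=4):
        # Each packet: (sequence number, total packet count, payload).
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        return [(seq, len(chunks), chunk) for seq, chunk in enumerate(chunks)]

    def reassemble(packets):
        # Sort by sequence number, so arrival order doesn't matter.
        ordered = sorted(packets, key=lambda p: p[0])
        assert len(ordered) == ordered[0][1], "a packet is missing"
        return "".join(chunk for _, _, chunk in ordered)

    packets = to_packets("Hello, ARPANET!")
    packets.reverse()             # simulate out-of-order delivery
    print(reassemble(packets))    # prints: Hello, ARPANET!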

DARPA originally used a packet-switching protocol called NCP (Network Control Protocol), which worked fine while only a few computers were connected. But when other networks already running their own protocols had to be connected, a new intermediary protocol had to be developed: TCP (Transmission Control Protocol). The advantage of TCP over other protocols is that it allowed diverse computer network protocols to communicate with each other.

IP

In order for computer networks to communicate with each other, they need a way to find each other. When TCP was created to organize the way packets are divided, a separate specification known as the Internet Protocol (IP) was created to help computers locate each other over the internet.

An IP address consists of four numbers separated by periods. A typical IP address looks like this: 216.27.61.137. Because each of the numbers can have a value from 0 to 255, there are almost 4.3 billion possible addresses. These addresses work like zip codes and help computers find each other.

Every computer network connected to the internet has a unique IP address. Not every computer connected to the internet has its own individual IP address, though. If that were the case, we would have run out of IP addresses long ago. Instead, individual networks are in charge of delivering information to the computers inside them. Since most computer networks run on a TCP/IP addressing system, this usually results in two sets of IP addresses: one that finds the larger network on the internet, and one that finds the computer inside that network.
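
As a small sketch, Python's standard ipaddress module can illustrate both points; 192.168.1.10 is an assumed example of an inside-the-network address:

    import ipaddress

    addr = ipaddress.ip_address("216.27.61.137")   # the example above
    print(addr.packed)    # b'\xd8\x1b=\x89' -> four bytes, one per number
    print(256 ** 4)       # 4294967296 -> almost 4.3 billion addresses

    # A public address identifies a network on the internet; machines
    # inside it typically use private addresses instead.
    print(addr.is_private)                                   # False
    print(ipaddress.ip_address("192.168.1.10").is_private)   # True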

DNS

Computers can easily find each other using numbers like IP addresses, but people have a harder time memorizing sets of numbers. So in 1983 the Domain Name System (DNS) was created, which allows people to use names instead of IP addresses to reach computers (the University of Wisconsin built one of the first name servers). The system consists of a database that translates names like planetoftheweb.com to IP addresses like 74.208.116.67.
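
You can watch this translation happen with a one-line lookup in Python. Note that the answer depends on the domain's current DNS records, so it may not match the address above:

    import socket

    # Ask the system's DNS resolver to translate a name into an IP address.
    print(socket.gethostbyname("planetoftheweb.com"))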

Because of the distributed nature of the internet, it would be impractical to have a single computer host this database, so many computers on the internet share this useful function.

Clients vs Servers

Most people connect to the internet to perform specific functions: run programs, communicate with one another, and so on. Because of this, we need more than just a way to connect different networks; we need certain machines to run programs that help us do things.

When you visit a website, the computer you're using to view the page is called a client. A client is a machine that connects to a server, whose job is to send the requested information back to the machine that asked for it.

A server has to be on at all times in order to provide information whenever it is requested, and it has to run software that allows it to handle multiple types of requests from multiple clients. A client, on the other hand, doesn't need to be connected to the internet all the time.
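
The relationship can be sketched with two short Python functions, one per role. The host and port (127.0.0.1:8080) are arbitrary values chosen for illustration:

    import socket

    def run_server(host="127.0.0.1", port=8080):
        # The server stays on, listening for requests from any client.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((host, port))
            srv.listen()
            while True:                       # runs continuously
                conn, addr = srv.accept()     # wait for a client to connect
                with conn:
                    request = conn.recv(1024)
                    conn.sendall(b"Hello from the server")

    def run_client(host="127.0.0.1", port=8080):
        # The client connects only when it needs something.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((host, port))
            cli.sendall(b"Hello from the client")
            print(cli.recv(1024).decode())

Run run_server() in one process and run_client() in another to see a request and a response travel between the two roles.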

Protocols

When most people think of the internet, they usually think of the World Wide Web or email, but in reality many types of information travel through the internet, including news, Gopher, Telnet, electronic mail, the File Transfer Protocol (FTP), and the HyperText Transfer Protocol (HTTP). Each of these types of information has its own language for communicating, known as a protocol.

When you use a web browser, for example, you are working with the HTTP protocol. When you use an email program, you're working with electronic mail protocols. Besides these, a web professional will probably also use FTP and maybe Telnet. Because of the World Wide Web, news and Gopher, which were essentially ways of organizing information, are less used today.
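
To see that a protocol is literally a language, you can speak HTTP yourself over a plain connection. This sketch assumes Python 3.8+ and uses example.com, a domain reserved for documentation:

    import socket

    # Send a hand-written HTTP request and read the raw reply.
    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk

    # The first line of the reply is the status line, e.g. HTTP/1.1 200 OK.
    print(reply.split(b"\r\n")[0].decode())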

URL

In order to locate information on a server, the URL (Uniform Resource Locator) was created. In the same way that TCP/IP allows you to find a specific computer or network on the internet, a Uniform Resource Locator allows you to find a specific resource inside a server. The URL specifies the type of information a client is requesting.

A URL consists of a protocol identifier like http:// or ftp://, then a host name, which often begins with a label such as www. or ftp., followed by the domain name, then optionally the port number through which the request will be handled, such as :80 or :21. Next comes a path of directories separated by slashes (/), followed by a filename, which may have an extension like .html or .php. Finally, additional data such as server variables can be passed through a URL after a ? mark.
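
Python's standard urllib.parse module breaks a URL into exactly these parts; the URL below is a made-up example:

    from urllib.parse import urlparse

    parts = urlparse("http://www.example.com:80/articles/internet.html?page=2")

    print(parts.scheme)    # 'http'            -> the protocol identifier
    print(parts.hostname)  # 'www.example.com' -> host and domain name
    print(parts.port)      # 80                -> the port number
    print(parts.path)      # '/articles/internet.html' -> directories and file
    print(parts.query)     # 'page=2'          -> variables after the ? mark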

The server is in charge of routing each request to the appropriate computer or software running that protocol's service. In a large network, one computer may run the web server, another the email server, and so on. The client usually doesn't care how the server handles all this, so the URL serves as a generic way to access different types of information inside a network.

Birth of the World Wide Web

In 1990, Tim Berners-Lee proposed using hypertext (text with references, or links, to other information that can be clicked) to connect pages of data together through the TCP and DNS systems and form a web of information on the internet. The World Wide Web, along with email, is among the most used services on the internet.
