Geek News

What you need to know before buying a computer

Guru 42 -

At last, the secret of what you need to know before buying a computer is revealed: there is no one-size-fits-all answer. But you don't need to be a world-class geek to learn computer buzzwords and understand some basic concepts before you shop for your next computer.

I usually try to stay out of the Apple versus Microsoft debates. Since I am updating some content on desktop operating systems on Computerguru.net, I thought I would use this blog post to address the often-asked question of "what computer should I buy" and add this perspective. I will also introduce a few new articles that answer some frequently asked questions relevant to someone shopping for a computer.

Recently on an online forum the question of "what computer should I buy" was asked based on the idea that a MacBook Pro is inherently the best laptop out there. The person asking the question was looking for reasons to buy a MacBook Pro, but gave no clues about how they were going to use it. That is a very important factor in answering the question! I never answer any "what computer should I buy" questions for friends and family until I have asked several questions of my own.

I laughed as I read one of the answers, which stated, "If all you are going to do is web surfing, social media, and email you don't need a MacBook Pro." Yeah, that's right. There are Chromebooks as well as cheap Windows notebooks that could do that for a lot less money!

My best advice to anyone looking to buy a computer: think long and hard about how you are going to use it, find other people with the same wants and needs, and ask them what they own and what they like and don't like about it.

I am not a graphic designer or an artist; those are the types of users who are typically the Apple fans. I have been working in enterprise computer networking for more than 20 years, and I started working on desktop computers in the 1980s. I look at the computer as a tool, and I look for the best tool for the task at hand. I have no loyalties to any specific brands.

Many answers comparing Microsoft to Apple lean on luxury-car versus cheap-import comparisons, implying that if you could afford the expensive luxury car but chose otherwise, you must be a fool. So let me run with that analogy.

Take a step back and look at the history of Apple versus Microsoft. In the 1990s, when Windows 95 dominated the desktop, Microsoft was the Ford F-150 pickup truck. Not many people would describe the Ford F-150 as a sexy luxury vehicle, but many would describe it as the workhorse vehicle that gets the job done. There's a good case to be made that the folks marketing to pickup truck buyers have a different plan than those looking to sell the sexy luxury vehicle.

A computer is a tool I use for work as well as recreation. I work in a business world that is Microsoft based. We are required to purchase a specific brand of Windows-based computers, not my favorite brand, but that's my environment. My problems are not so much with Windows itself as with the vendors supporting our users, who create applications that run on old Microsoft operating systems. I have to deal with home-cooked applications that are designed for last-generation Windows computers. That's my world.

I have had iPads and various other Apple products in my home, and they never got used. Even if the interface is only slightly different, I don't have time to deal with it. I have had access to Kindles and Nooks, and they never got used. I can put an application on my Windows notebook that reads the books, so why do I need to learn a new interface? It's called being lazy, I know, but I have no personal reason to care about Apple products. It's nothing personal.

If one of my family members wants to buy a luxury car, I will be happy to ride in it. If money were no object, tomorrow I would go out and buy a new Ford F-150 pickup truck that best suited my needs.

I don't get emotionally attached to my computers or automobiles. They are tools. Nothing more.

You too can understand computer buzzwords

Since 1998, ComputerGuru.net has attempted to provide self-help and tutorials for learning basic computer and networking technology concepts, maintaining the theme "Geek Speak Made Simple." Recently I upgraded the Drupal content management software behind Computerguru and refreshed a few pages.

Based on commonly asked questions, I have added several new pages to the section Common technology questions and basic computer concepts. On computer operating systems, we have added an article that explains the major differences between desktop computer operating systems, and one on installing Linux and understanding all the different Linux distributions.

I get a lot of questions on computer cables, and I finally finished up this article, Ethernet computer network cable frequently asked questions answered, plus an article explaining computer network modular connectors and telephone registered jacks.

And based on many questions on printers, we had some fun coming up with this article, the ugly truth about computer printers.

Yes, I know that sounds like a lot of geek speak, but we do our best to break it all down into small bite sized chunks, so it is easy to digest.  Please take a few minutes to check out the new content, and please share it with your geek friends on social media.

Are there any topics we need to cover? Any questions we have missed?

Are there any buzzwords bothering you?  Something else you would like us to cover here at the Guru 42 Universe?  Let us know: Guru 42 on Twitter -|- Guru 42 on Facebook -|- Guru 42 on Google+ -|- Tom Peracchio on Google  



Wireless Networks in Simple Terms WLAN and Wi-Fi defined

ComputerGuru -

The term Wi-Fi is often used as a synonym for wireless local area network (WLAN). Specifically, the term "Wi-Fi" is a trademark of a trade association known as the Wi-Fi Alliance. From a technical perspective, WLAN technology is defined by the Institute of Electrical and Electronics Engineers (IEEE).

In computer networking everything starts with the physical layer, which for many years was a copper wire. The physical layer was later expanded to include anything that can stand in for the wire, such as fiber optic cable, infrared, or radio spectrum technology.

A wireless network is any type of computer network that is not connected by cables of any kind. While cell phone technology is often discussed as a form of wireless networking, it is not the same as the wireless local area network (WLAN) technology discussed here.

What is Wi-Fi?

The term Wi-Fi has often been used as a technical term to describe wireless networking. Wi-Fi is actually a trademark of the Wi-Fi Alliance, a global non-profit trade association formed in 1999 to promote WLAN technology. Manufacturers may use the Wi-Fi trademark to brand products if they are certified by the Wi-Fi Alliance to conform to certain standards.

A common misconception is that Wi-Fi is an acronym for "wireless fidelity"; it is not. The Wireless Ethernet Compatibility Alliance wanted a cooler name for the new technology, as "the IEEE 802.11b Alliance" was not all that catchy. The marketing company Interbrand, known for creating brand names, was hired to create a brand name to market the new technology, and the name Wi-Fi was chosen. The term "Wi-Fi," with the dash, is a trademark of the Wi-Fi Alliance.

IEEE 802.11 defines WLAN technology

The actual technical standards for wireless local area network (WLAN) computer communication are known as IEEE 802.11. IEEE refers to the Institute of Electrical and Electronics Engineers, a non-profit professional association formed in 1963 by the merger of the Institute of Radio Engineers and the American Institute of Electrical Engineers.

IEEE 802 refers to a family of IEEE standards dealing with networks carrying variable-size packets, which makes it different from cell phone based networks; 802.11 is the subset of the family specific to WLAN technology. Victor "Vic" Hayes was the first chair of the IEEE 802.11 group, which finalized the wireless standard in 1997.

This link takes you to the 802.11 specification, which contains all the geek speak on how it works: IEEE-SA Get 802 Program, https://standards.ieee.org/about/get/802/802.11.html

How fast is Wi-Fi?

Wi-Fi speed is rated according to maximum theoretical network bandwidth defined in the IEEE 802.11 standards.

For example:

IEEE 802.11b - up to 11 Mbps

IEEE 802.11a - up to 54 Mbps

IEEE 802.11n - up to 300 Mbps

IEEE 802.11ac - up to 1 Gbps

IEEE 802.11ad - up to 7 Gbps

If you look at the IEEE 802.11 Wireless LANs standards you will see the ongoing evolution, with several standards under development at this time to increase speeds even more.

Keep in mind that Wi-Fi speed describes how fast your internal network is, as in your wireless LAN (local area network).

Fast Wi-Fi does not mean a fast internet connection; it has nothing to do with the speed or bandwidth of your internet access.

How does Wi-Fi work?

A Wi-Fi enabled device such as a personal computer or video game console can connect to the Internet when within range of a device such as a wireless router connected to the Internet. Wireless local area network (WLAN) technology allows your device to connect to the router, which in turn connects you to the internet. In order to connect to the internet, you need a unique IP (internet protocol) address. On your home network, when your router is connected to the internet, it has a public address; that is the one that faces the internet, and it is unique relative to all the other routers on the internet.

Your router also has a local IP address, something like 192.168.1.2, which is in a private IP address space. Addresses beginning with 192.168 cannot be transmitted onto the public Internet and are typically used for home local area networks (LANs). If you have four home computers, your router creates a home network, and the four home computers each get a number that is unique in relationship to the others. Your local computers connect to the router either by a wire plugged into the router or through a wireless signal.
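If you want to check this yourself, Python's standard ipaddress module knows which ranges are reserved as private. A minimal sketch (the addresses are just examples):

```python
import ipaddress

# Addresses in ranges such as 192.168.0.0/16 and 10.0.0.0/8 are reserved
# for local networks and are never routed on the public internet.
for addr in ["192.168.1.2", "10.0.0.5", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    kind = "private (LAN only)" if ip.is_private else "public (internet routable)"
    print(f"{addr}: {kind}")
```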

Routers are used to create logical borders between networks, and in this way they allow a gateway, such as an access point to the internet, to be shared. What is happening here is the process known as subnetting, which in geek speak terms can get very complex.


The ugly truth about computer printers

ComputerGuru -

The printer is the source of pain and problems for every computer user.  The ugly truth about computer printers is that everyone has one and they all stink.

A printer is very mechanical, there are a lot of moving parts.  Every printer from the very simplest, to the most complex, has numerous gears, springs, and rollers that all need to move in perfect harmony in order for your printer to work.  

In understanding why computer printers are a source of frustration, let me explain some of the other components of a typical computer system. On your home desktop computer you have a large box that everything plugs into. I hear people call this box a CPU; some call it a hard drive. Technically the CPU is one small part on the main circuit board that sits inside that box. The main circuit board, as well as the CPU and memory modules that plug into it, are solid state, which means they are all electronic. Unless you get hit with a power surge or some external electrical issue, it is rare for the electronics of a computer to wear out over time. Even hard drives, which once were very mechanical, are now becoming solid state, which means no moving parts and much more reliability.

Same thing with your display, what we used to call a monitor. Back in the days of CRT monitors, the CRT (cathode ray tube) wore out over time; it degraded because it heated up. In my experience over the years I've seen some monitor failures. Not so much with modern displays; like the computer itself, they are now all electronic and less likely to degrade over time.

Things like keyboards and mice still have a few mechanical parts to them, but they don't wear out often. When they do wear out, they are simple to replace, and people don't get too excited when they need to be replaced.

But alas, the printer, the pain of every computer user.  You just typed that report and you need it now.  You are leaving for the movies and you want to print the tickets, and the printer won't work.  There is never a convenient time for the printer to break.  

Even the simplest of printers has a handful of gears, springs, and rollers that wear out over time. The paper tray gets banged around every time you fill it up. Every time someone takes out a paper tray, they bend something, they twist something, a part gets knocked off. With the need to lower the cost of printers, many of these mechanical parts are made from very low quality metal and plastic.

And here is one element of printers that many people overlook: the paper. When the air gets dry, when the heat is on in the winter, the paper gets full of static electricity, so it jams more often. Instead of taking the paper out of the tray, fanning it a bit, and flipping it over, you bang the paper tray a few times. Maybe you yank the paper out when it jams, bending and stretching the metal arms and guides on the paper tray.

When the weather is damp and humid, that will also cause the paper to jam. Do you close the wrapper on your paper when it is just lying around? Or is it just thrown on a shelf outside the wrapper? I have seen many print quality issues caused by paper. Having spent a long career in office automation and computer networking, I could write a book on the subject of printer problems caused by paper. The hardest part in answering this was keeping it brief.

Types of printing technology

Another issue you have with printers is consumable supplies like ink and toner. Every freaking printer model has its own unique ink or toner cartridge. When you try to save money by refilling cartridges, it is a crapshoot. More often than not, I have seen refilled cartridges cause problems.

In the early days of desktop computers the dot matrix printer was the standard. They could be pretty noisy as the small needles in the print head fired through the ribbon, creating dots of information on your paper. Ribbons faded over time, and copy quality was not great, but printer ribbons were fairly inexpensive compared to modern ink cartridges. The boxes of paper with the tractor feed holes seem a little primitive compared to the plain paper printers of today, but in many ways tractor feed paper was a more problem-free solution than many of the modern printers with paper trays.

Inkjet printers began replacing dot matrix printers, offering higher quality. A less noisy printer with higher quality could have been a blessing; instead, inkjet technology was more of a curse. The color inkjet printer uses multiple color ink cartridges, each of which builds a print head into the replaceable cartridge, adding to its expense. The cartridges themselves have very narrow inkjet nozzles that are prone to clogging, and they dry out over time. Newer intelligent ink cartridges that communicate with the printer add another level of complexity, and another potential point of failure.

Laser printers have been around since the very early days of desktop computers. They are high quality printers, but were for many years very high cost. In the early days it was rare to have a laser printer on your home computer, but over the years the quality has increased, and the price has dropped dramatically. You can get a low cost black-and-white laser printer for less than a hundred dollars. That is what I have in my home office; I have given up on low cost inkjet printers. Most of the time I use my home office laser printer to print a document such as a receipt, or maybe my tickets for a movie or sporting event, and I don't need color for that.

The price of a laser printer toner cartridge sounds expensive, the last one I replaced was over $50, but toner cartridges last ten times longer than inkjet cartridges. If you look at it on a cost-per-copy basis, a laser printer is significantly cheaper to own than an inkjet. If I really need a high quality color copy, I can take the document on a USB drive to a local shop and get one there.

Prices have been dropping in recent years, and color laser printers cost a fraction of what they once did. If you need a color printer and print more than a few copies a month, do some calculations on the cost per copy of a color laser printer. You might be surprised to see that over the long haul a color laser printer is not as expensive to own as an inkjet.
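To make the cost-per-copy math concrete, here is a minimal sketch; the prices and page yields below are made-up example figures, so substitute the numbers for the cartridges you are actually comparing:

```python
# Hypothetical example figures -- check the real cartridge price and
# rated page yield for the printers you are comparing.
inkjet_cartridge_price = 25.00   # dollars
inkjet_page_yield = 200          # pages per cartridge

laser_cartridge_price = 50.00    # dollars
laser_page_yield = 2000          # pages, roughly ten times the inkjet

inkjet_cost_per_page = inkjet_cartridge_price / inkjet_page_yield   # 0.125
laser_cost_per_page = laser_cartridge_price / laser_page_yield      # 0.025

print(f"Inkjet: ${inkjet_cost_per_page:.3f} per page")
print(f"Laser:  ${laser_cost_per_page:.3f} per page")
```

With these example numbers the laser printer costs a fifth as much per page, even though its cartridge costs twice as much up front.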

It's not your fault for buying a crappy printer

Between having a home computer system and working in the field of office automation and business machines since the early 1980s, I have worked with numerous brands of printers and printing equipment. It is hard to recommend a specific brand or model of printer at any given time because they are constantly changing. In a marketplace that is always shopping for low cost, a manufacturer will often cut corners to lower costs, and a usually reliable brand will have some really horrible models.

We are discussing the computer printer here as a hardware device, but software issues, such as finding the proper drivers for your current computer operating system and getting Wi-Fi printing to work on your network, can also create problems. Shop wisely, and read over consumer reviews of the currently popular printers to see the potential problems for a model you are considering buying.

The primary reason the printer is the most likely part of your computer system to cause you pain comes down to the printer having the most moving parts, but there are also many other issues dealing with supplies such as paper, ink, and toner. Maybe you won't feel any better about all the printing problems you are having after reading this article, but at least you will know it's not your fault for buying a crappy printer; they all stink.


Computer network modular connectors and telephone registered jacks

ComputerGuru -

The plastic plugs on the ends of telephone wiring and computer cables are defined by various technical standards. Because these standards are full of technical definitions and acronyms, it is easy to see how street slang becomes the accepted definition for many of the plastic plugs.

It is important to understand that connecting devices together is more than just matching up connector ends on a piece of wire. Just because you can find an adapter to make your cable fit into a connection is no guarantee that the device will communicate on your network. Some connectors that look exactly alike could have different wiring configurations.

In the world of technology, street slang, or common buzzwords, often become the accepted description of something rather than the specific technology standard. For example, describing Ethernet patch cables as using RJ45 connectors illustrates one of the most misused terms in the world of technology.

We will do our best to break down some of the buzzwords and jargon to help you understand the differences in the terms.

Modular connectors

A modular connector is an electrical connector that was originally designed for use in telephone wiring, but has since been used for many other purposes. Many applications that originally used a bulkier, more expensive connector have converted to modular connectors. Probably the most well known applications of modular connectors are for telephone jacks and for Ethernet jacks, both of which are nearly always modular connectors.

Modular connectors are designated with two numbers that represent the quantity of positions and contacts; for example, the 8P8C modular plug is a plug having eight positions and eight contacts.

Do not assume that connectors that look the same are wired the same. Contact assignments, or pinouts, vary by application. Telephone network connections are standardized by registered jack numbers, and Ethernet over twisted pair is specified by the TIA/EIA-568 standard.

Telephone industry Registered Jack

A Registered Jack (RJ) is a wiring standard for connecting voice and data equipment to a service provided by a telephone company. In some wiring definitions you will see references to the Local Exchange Carrier (LEC), which is a regulatory term in telecommunications for the local telephone company.

Registration interfaces were created by the Bell System under a 1976 Federal Communications Commission (FCC) order for the standard interconnection between telephone company equipment and customer premises equipment. They were defined in Part 68 of the FCC rules (47 C.F.R. Part 68) governing the direct connection of Terminal Equipment (TE) to the Public Switched Telephone Network (PSTN).

Connectors using the distinction Registered Jack (RJ) describe a standardized telecommunication network interface. The RJ designations only pertain to the wiring of the jack; it is common, but not strictly correct, to refer to an unwired plug by any of these names.

For example, RJ11 is a standardized jack using a 6P2C (6 position, 2 contact) modular connector, commonly used for single line telephone systems. You will often see telephone cables with four wires used for common analog telephones referred to as RJ11 cables. Technically speaking, RJ14 is the configuration for two lines using a six-position, four-conductor (6P4C) modular jack.

RJ45 is a standard jack once specified for modem or data interfaces using a mechanically-keyed variation of the 8P8C (8 position 8 contact) body. Although commonly referred to as an RJ45 in the context of Ethernet and category 5 cables, it is incorrect to refer to a generic 8P8C connector as an RJ45.

Why is an Ethernet eight-pin modular connector (8P8C) not an RJ45?

Both the twisted pair cabling used for Ethernet and the telecommunications RJ45 standard use the 8P8C (eight position, eight contact) connector, and therein lies the confusion and the misuse of the terms.

The 8P8C modular connector is often called RJ45 after the telephone industry standard defined in FCC Part 68. The Ethernet standard is different from the telephone standard: TIA-568 is a set of telecommunications standards from the Telecommunications Industry Association (TIA). Standards T568A and T568B are the pin-pair assignments for eight-conductor, 100-ohm balanced twisted pair cabling to 8P8C (8 position, 8 contact) modular connectors.
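For reference, here is the difference between T568A and T568B written out as data; a minimal sketch showing that the two standards differ only in swapping the orange and green pairs:

```python
# TIA/EIA-568 pin assignments for 8P8C modular connectors.
T568A = {1: "white/green",  2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",   6: "orange", 7: "white/brown",  8: "brown"}

T568B = {1: "white/orange", 2: "orange", 3: "white/green",  4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown",  8: "brown"}

# The only difference: the orange and green pairs trade places.
swapped = [pin for pin in T568A if T568A[pin] != T568B[pin]]
print(swapped)  # [1, 2, 3, 6]
```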

How does a RJ45 to RJ11 converter work?

There is no such thing as an RJ45 to RJ11 converter. They are two different types of connectors for two totally different standards of communication. Cables with various pin configurations and wire pairs are created for specific purposes. Be careful when looking to "convert" one type of wire into another. An adapter that allows you to connect an RJ11 plug into an RJ45 jack is not converting anything.

Technically speaking, neither RJ11 nor RJ45 is a computer networking standard. Many times when people are looking to convert between RJ11 and RJ45, they are dealing with a device made for a two wire phone line and trying to connect it to an Ethernet eight-pin (8P8C) unshielded twisted-pair (UTP) modular connector.

I see many questions on internet forums asking about various adapters and converters. Just because you can convert a plug from one type to another does not mean that the signal traveling along the wire will work as you expect. I cannot stress enough the importance of not using any type of adapter or converter without knowing the exact wiring configuration of the devices you are trying to connect.


Ethernet computer network cable frequently asked questions answered

ComputerGuru -

You will often hear a common computer network patch cable called an "Ethernet cable." While most modern local area networks (LANs) use the same type of cable, the term Ethernet refers to a family of computer networking technologies that defines how information flows through the wire; it does not define the physical network cable.

The standards defining the physical layer of wired Ethernet are known as IEEE 802.3, which is part of a larger set of standards by the Institute of Electrical and Electronics Engineers Standards Association.

Cable types, connector types and cabling topologies are defined by TIA/EIA-568, a set of telecommunications standards from the Telecommunications Industry Association (TIA). The standards address commercial building cabling for telecommunications products and services.

Computer network cabling

Twisted pair cabling is a common form of wiring in which two conductors are wound around each other for the purpose of canceling out electromagnetic interference, which can cause crosstalk. The number of twists per meter makes up part of the specification for a given type of cable.

The two major types of twisted-pair cabling are unshielded twisted-pair (UTP) and shielded twisted-pair (STP). In shielded twisted-pair (STP) the inner wires are encased in a sheath of foil or braided wire mesh. Unshielded twisted pair (UTP) cable is the most common cable used in modern computer networking.

What does Cat5 Cable mean?

A Category 5 cable (Cat5 cable) is made up of four twisted-pair wires, certified to transmit data up to 100 Mbps. Category 5 cable is used extensively in Ethernet connections in local networks, as well as telephony and other data transmissions.

Cat5 cable has been the standard for homes and small offices for many years. As technology for twisted pair copper cabling has progressed, successive categories have given buyers more choices. Category 5e and Category 6 cable offer more potential bandwidth and better handling of signal noise or loss. Newer cable types also help deal with crosstalk, or signal bleeding, which can be a problem with unshielded twisted pair cabling.

The Category 5e specification improves upon the Category 5 specification by revising and introducing new requirements to further mitigate the amount of crosstalk. The bandwidth (100 MHz) and physical construction are the same between the two.

The Category 6 specification improves upon the Category 5e specification by improving frequency response and further reducing crosstalk. The improved performance of Cat 6 provides 250 MHz of bandwidth and supports 10GBASE-T (10-Gigabit Ethernet). Cat 6 cable is fully backward compatible with previous versions, such as Category 5/5e.

Older versions of voice and data cable

Category 1: traditional UTP telephone cable, which can transmit voice signals but not data. Most telephone cable installed prior to 1983 is Category 1. Category 2: UTP cable made up of four twisted-pair wires, certified for transmitting data up to 4 Mbps. Official TIA/EIA-568 standards have only been established for cables of Category 3 ratings or above.

Category 3 was widely used in computer networking in the early 1990s for 10BASE-T. In many common names for Ethernet standards the leading number (10 in 10BASE-T) refers to the transmission speed in Mbit/s. BASE denotes that baseband transmission is used. The T designates twisted pair cable.

Category 4 cable consists of four unshielded twisted-pair (UTP) copper wires used in telephone networks which can transmit voice and data up to 16 Mbit/s. Category 4 cable is not recognized by the current version of the TIA/EIA-568 data cabling standards.

What does Patch Cable mean?

A patch cord, also called a patch cable, is a length of cable with connectors on each end that is used to connect one electronic device to another. In computer networking what people often call an “Ethernet Cable” is Unshielded Twisted-Pair (UTP) patch cable.

What does Straight-Through Cable mean?

A straight-through cable is a standard patch cable used in local area networks. In a straight-through cable, the wired pins on one end match those on the other end. In other words, pin 1 on one end is connected to pin 1 on the other end, and the order follows straight through from pin 1 to pin 8.

What is a Crossover Cable?

A crossover cable is used for the interconnection of two similar devices. It is enabled by reversing the transmission and receiving pins at both ends, so that output from one computer becomes input to the other, and vice versa. The reversing or swapping of cables varies, depending on the different network environments and devices in use.

This type of cable can be an alternative to wireless connections where one or more computers access a router through a wireless signal. Use a straight-through cable when connecting a router to a hub, a computer to a switch, or a LAN port to a switch, hub, or computer.

Why do you need a crossover cable?

A traditional port found in a computer NIC (network interface card) is called a media-dependent interface (MDI). A traditional port found on an Ethernet switch is called a media-dependent interface crossover (MDIX), which reverses the transmit and receive pairs. However, if you want to interconnect two switches, where both switch ports used for the interconnection are MDIX ports, the cable needs to be a crossover cable.

Introduced in 1998, Auto MDI-X made the distinction between uplink and normal ports and manual selector switches on older hubs and switches obsolete. Auto MDI-X automatically detects the required cable connection type and configures the connection appropriately, removing the need for crossover cables.

Gigabit and faster Ethernet links over twisted pair cable use all four cable pairs for simultaneous transmission in both directions. For this reason, there are no dedicated transmit and receive pairs, and consequently, crossover cables are never required.
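As a sketch of what "reversing the transmit and receive pairs" means in the older 10/100 two-pair case (this does not apply to gigabit links, as noted above):

```python
# 10BASE-T and 100BASE-TX use two pairs: pins 1-2 transmit, pins 3-6 receive.
straight_through = {1: 1, 2: 2, 3: 3, 6: 6}   # pin N connects to pin N

# A crossover cable swaps the transmit and receive pairs end to end.
crossover = {1: 3, 2: 6, 3: 1, 6: 2}

for pin, far_end in crossover.items():
    print(f"pin {pin} on one end -> pin {far_end} on the other")
```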


Installing Linux defining distros which version should you choose

ComputerGuru -

In April 1991, Linus Torvalds, at the time a 21 year old computer science student at the University of Helsinki, Finland, started working on some simple ideas for an operating system. Although the desktop computer market exploded throughout the 1990s, the Linux operating system remained pretty much the domain of geeks who like to build their own computers. I really believed that, more than 20 years later, we would have Linux computers in our homes as common as the Windows or Apple varieties.

The only dent in the domination of Windows and Apple desktop computers in recent years has been the introduction of the Chromebook as a personal computer in 2011. The Chrome operating system is a strange mix: the Linux kernel with the Google Chrome web browser as a user interface.

The Linux operating system has come a long way since the mid 1990s. From painful experiences with floppy disks and hunting down hardware drivers, my experience with installing many distributions of Linux in recent years has been pretty painless.

The Linux kernel

Just as I did with answering the question, "what is the best desktop computer operating system," I am going to generalize a bit here so we don't get too deep into the geek speak. Hopefully the tech purists won't beat me up too much for generalizing. Let's begin with quickly going over the basic definitions.

Think of the Linux kernel as an automobile engine and drive train that was designed by a community. Once the engine and drive train have been developed there are groups that split off and design their own version of an automobile. Each of these automotive design groups have their own community with goals for how they want to use their finished product, some may focus on style and looks, another group may want to focus on being practical and functional. Once the group has a general purpose in mind, they will form an online community where they can share ideas in creating a finished product.

The Linux Distro

Each customized version of Linux that adds additional modules and applications is supported by an online community offering internet downloads as well as support. You will see the question phrased as which Linux distro should you use; distro is a shortened version of the term distribution. There are many distros in the Linux family, all based on the same Linux kernel, the core of the computer operating system. There are geeks who swear by which is the best Linux distro, but in the end it is a matter of what works best for you.
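If you are ever unsure which distro and kernel a machine is running, here is a minimal sketch, assuming a Linux system with the standard /etc/os-release file:

```python
import platform

# /etc/os-release identifies the distribution on modern Linux systems;
# the kernel version comes from uname via the platform module.
info = {}
with open("/etc/os-release") as f:
    for line in f:
        if "=" in line:
            key, _, value = line.strip().partition("=")
            info[key] = value.strip('"')

print("Distro:", info.get("PRETTY_NAME", "unknown"))
print("Kernel:", platform.release())
```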

When it comes to comparing the various distributions, I find "the big three" to be very similar, because in reality they are variations of the same family. As of the time of this update, March 2017, based on various statistics the most popular version of Linux is Mint, with Debian coming in second, followed by Ubuntu. Mint is a fork of Ubuntu, which is itself a fork of Debian; Mint was forked off Ubuntu with the goal of providing a more familiar desktop graphical user interface.

First answer the question, why are you looking at Linux? Do you have an old computer with an outdated operating system that you are looking to upgrade? Or perhaps you just want to see what all the fuss is about with the "free" alternative to Windows or Apple?

If you simply want to play with Linux and see what all the fuss is about, Mint is a very easy place to start. I have installed Mint on a few old computers with no issues. One of the biggest issues I have experienced with many versions of Linux is the lack of drivers for certain pieces of hardware in some laptop models. There are a few old Dell laptops I gave up on installing Linux on because finding drivers for the Wi-Fi was not worth the effort.

Here's a look at various distributions of Linux.

In our previous question on "what is the best desktop computer operating system" we addressed the topic of the "free" alternative to Windows or Apple as we explained open source software. Richard Stallman, the father of the free software movement, explains that the freedom refers to the preservation of the freedoms to use, study, distribute, and modify that software, not to zero cost. In illustrating the concept of gratis versus libre, Stallman is famous for the sentence, "free as in free speech, not as in free beer." Even though Linux is open source, there are versions that are commercially distributed and supported.

Fedora - Red Hat

Red Hat Commercial Linux, introduced in 1995, was one of the first commercially supported versions of Linux, and it entered the enterprise network environment because of that support. Red Hat Linux has evolved quite a bit over the years, as Red Hat Linux merged with the community-based Fedora Project in 2003.

Fedora is now the free, community supported home version of Red Hat Linux. While Fedora ranks slightly behind the other distros we mention here in popularity, it is often at the top of the list when it comes to integrating new package versions and technologies into the distribution. Many users in the enterprise environment rave about the stability of Fedora.

SUSE - openSUSE

openSUSE claims to be "the makers' choice for sysadmins, developers and desktop users." You may not find a lot of neighborhood geeks telling you to try openSUSE, but it ranks near the top of many charts as far as popularity. SUSE was marketing Linux to the enterprise market in 1992, before Red Hat. Many American geeks are not as familiar with SUSE because it was developed in Germany. I have not had any issues with installing it. You can always download a "live CD," which allows you to run the operating system off of the CD without having to install it.

openSUSE is the open source version. SUSE is often used in commercial environments because professional help is available under a support contract through SUSE Linux. Having worked as a Novell NetWare systems administrator, I was involved with SUSE Linux as the Novell NetWare network operating system was coming to the end of its life when Novell bought the SUSE brands and trademarks in 2003. When Novell was purchased by The Attachmate Group in 2011, SUSE was spun off as an independent business unit. SUSE is geared for the business environment with SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop; each focuses on packages that fit its specific purpose.

Debian - Ubuntu - Mint

Ubuntu and Mint are Debian-based: their package manager is APT (the Advanced Package Tool), a free software user interface that works with core libraries to handle the installation and removal of software on Debian-based Linux distributions. Their packages follow the DEB (Debian) package format.

Ubuntu is often used in commercial environments because professional help is available under a support contract through Canonical, the company behind Ubuntu.

Mint is basically the same OS as Debian or Ubuntu with a different default configuration, a lot of pre-installed applications, and a nice looking desktop. Mint was forked off from the Ubuntu community with the goal of providing a familiar desktop operating system. If you are looking for something to use as a server, Debian or Ubuntu may be a better choice.


What about all the rest?

There are more than 200 different versions of Linux. Once you go beyond the versions mentioned here, you are getting into support issues. With each of the three families of Linux we mention here, there is a commercially supported version and a community supported version. Keep in mind, if you are not buying support through one of the commercial versions mentioned here, each of these families has a well established online community for support of the open source version.

Is it time to switch to Linux?

Back in the late 1990s I was taking a community college course on Novell networking and systems administration using Novell NetWare. As part of the curriculum we had to write a term paper on an unrelated technology topic; I chose Linux on the desktop. I concluded that I was impressed with Linux as an operating system, but that it would not become a mainstream desktop operating system until hardware companies embraced it and sold home computers with Linux installed. Twenty years later, that really has not happened.

You could make the case that the Google Chromebook is a version of Linux installed and configured along with a computer, but the Google Chromebook has not become a mainstream home computer. If all you want to do is surf the net, interact on social media, and read your email, a Google Chromebook works fine. But beyond that there are many issues.

Hardware drivers and website plugins can be a problem when using any version of Linux. Many manufacturers don't develop Linux device drivers for their hardware, so you need to search them out yourself through your Linux community. With websites that need Digital Rights Management, like Amazon Video, Netflix, or Sling, getting your streaming to work on Linux can be difficult. Some websites don't recognize Linux as an operating system, and automatic installs of plugins fail.

I know I said at the beginning of this discussion that in recent years my experience installing Linux has been pretty painless, but I have access to name brand hardware on pretty basic computers. The situation with hardware drivers and browser plugins keeps improving, but beware, it can be an issue at times. It is still a concern that can turn your Linux experience sour. The biggest problem I have experienced in experimenting with Linux is network card and Wi-Fi drivers in laptop models.

In our last article we discussed why Microsoft Windows is so popular. Whether you love them or hate them, many applications only have a Windows version. There are many websites that offer "open source equivalents" to your favorite applications. Some equivalents work well; others are very buggy. The key to using any open source application is looking at how active the community that supports it is. Be cautious of applications that look cool and work well but are basically created and supported by a single individual; they can become unsupported when the developer moves on.

Take Linux for a test drive

Look for a live distribution of Linux that allows you to run a full instance of the operating system from CD, DVD, or USB without making changes to your current system. Many install downloads will offer you a live test drive of the distro that does not install anything to your hard drive. If everything works well in a live test drive, you can feel a bit more comfortable about doing the "real" install.


Desktop personal computer system basic parts defined

ComputerGuru -

If you are studying personal computers at the beginning of your career in technology, or perhaps you are just trying to understand how things work on your home computer to better deal with problems and upgrades, you can't get away without knowing some very basic definitions of the components of a desktop personal computer system.

Computer hardware is the collection of physical elements that make up a computer system such as a hard disk drive (HDD), monitor, mouse, keyboard, CD-ROM drive, network card, system board, power supply, case, and video card.

The main system board is sometimes called the motherboard. It is the central printed circuit board (PCB) in the system and holds many of the crucial components, providing connectors for other peripherals.

The central processing unit (CPU), the brain of a computer system, is the main component on the main system board. The CPU carries out the instructions of computer programs and performs the basic arithmetical, logical, and input/output operations of the system.

System boards have expansion slots, a CPU socket or slot, locations for memory cache and RAM, and a keyboard connector. Other components may also be present. A slot is a narrow notch, groove, or opening. A socket is a hollow piece or part into which something fits. System boards contain both sockets and slots, which are the points at which devices can be plugged in. A CPU slot is long and narrow, while a CPU socket is square.

RAM (Random Access Memory) is the computer's primary storage, which holds programming code and data being processed by the CPU.

A hard disk drive (HDD) is called secondary storage, while memory is called primary storage, because programs cannot be executed from secondary storage but must first be moved to primary storage. Basically, the CPU cannot "reach" a program still in secondary storage to execute it.
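As a tiny illustration of that point, even the simplest program copies data from secondary storage into RAM before the CPU can touch it; report.txt below is just a hypothetical example file:

```python
# The file lives on secondary storage (the disk). read() copies its
# contents into primary storage (RAM) as a bytes object; only then
# can the CPU actually process the data.
with open("report.txt", "rb") as f:
    data = f.read()          # disk -> RAM

print(len(data), "bytes now in memory")
```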

ROM is read-only memory. ROM chips, located on circuit boards, are used to hold programming code that is permanently stored on the chip.

Flash ROM can be reprogrammed whereas regular ROM cannot be. In order to change the programming code of regular ROM, the chip must be replaced. Upgrades to Flash ROM can be downloaded from the Internet.

BIOS stands for basic input-output system. It is used to manage the startup of the computer and ongoing input and output operations of basic components, such as a floppy disk or hard drive.

Computer software is a collection of computer programs and related data that provide the instructions for telling a computer what to do.

System software provides the basic functions for computer usage and helps run the computer hardware. An operating system is a type of system software that controls a computer's input and output operations, such as saving files and managing memory. Common operating systems are typically Windows based, but personal computers can also run an Apple or Linux based operating system.

Application software is computer software designed to perform specific tasks. Common applications include word processors such as OpenOffice.org Writer, spreadsheets such as Microsoft Excel, and business accounting packages such as QuickBooks by Intuit.

What is the difference between a PC (personal computer) and a workstation?

In a business environment you may have a computer on your desk that is very similar to the computer you have at home, but there is one major difference, the work computer is managed as part of a LAN (local area network) that contains many other computers. In the next section we define networking terms and go into a bit more detail on the concept of a LAN.

Some definitions will state that a workstation computer is faster and more powerful than a personal computer. Not necessarily. Terms like "faster and more powerful" are pretty ambiguous. The real difference is a bit more clear-cut: it is a point of reference in how they are used.

In your home you have a personal computer, it is the center of your personal technology universe. When you open up an application, it is on that computer. When you create a data file, like a Word document, you save it to that computer.

At work, when you open up an application on your workstation, it may be installed on your local computer, or it may be installed on an application server somewhere on your LAN. When you create a data file on your workstation, like a Word document, you save it to your personal directory on a file server that is on your LAN.

Many years ago, when computer systems were expensive, all the work was done on a mainframe, a huge computer surrounded by geeks in a special room. The end users had dumb terminals, meaning there was a keyboard and a monitor at your desk, but the box they attached to was called a dumb terminal because it did not do any work; it was dumb!

The concept of the workstation is that some of the "work" is done locally at your desktop, but some of the work could also be done on a computer somewhere else, in the case of the LAN, that somewhere else would be a server.


The Data Link Layer of the OSI model

ComputerGuru -

The Data Link Layer is Layer 2 of the seven-layer OSI model of computer networking.  The Data Link layer deals with issues on a single segment of the network.

Layer two of the OSI model is one area where the theoretical OSI reference model differs from the implementation of TCP/IP in the competing Department of Defense (DoD) model. As we will discuss, the TCP/IP implementation has a single lower layer, called the network interface layer, that encompasses Ethernet.

The IEEE 802 standards map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. Even though we discussed many of these Ethernet terms in discussing the Physical Layer of the OSI model, we also discuss them here in the context of the Data Link Layer.

The IEEE 802 LAN/MAN Standards Committee develops Local Area Network standards and Metropolitan Area Network standards. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). IEEE 802 splits the OSI Data Link Layer into two sub-layers named Logical Link Control (LLC) and Media Access Control (MAC).

The lower sub-layer of the Data Link layer, the Media Access Control (MAC), performs Data Link layer functions related to the Physical layer, such as controlling access and encoding data into a valid signaling format.

The upper sub-layer of the Data Link layer, the Logical Link Control (LLC), performs Data Link layer functions related to the Network layer, such as providing and maintaining the link to the network.

The MAC and LLC sub-layers work in tandem to create a complete frame. The portion of the frame for which LLC is responsible is called a Protocol Data Unit (LLC PDU or PDU).

IEEE 802.2 defines the Logical Link Control (LLC) standard that performs functions in the upper portion of the Data Link layer, such as flow control and management of connection errors.

LLC supports the following three types of connections for transmitting data:
• Unacknowledged connectionless service: does not perform reliability checks or maintain a connection; very fast and the most commonly used.
• Connection oriented service: once the connection is established, blocks of data can be transferred between nodes until one of the nodes terminates the connection.
• Acknowledged connectionless service: provides a mechanism through which individual frames can be acknowledged.

IEEE 802.3 is an extension of the original Ethernet and includes modifications to the classic Ethernet data packet structure.

The Media Access Control (MAC) sub-layer contains methods that logical topologies can use to regulate the timing of data signals and eliminate collisions.

The MAC address concerns a device's actual physical address, which is usually assigned by the hardware manufacturer. Every device on the network must have a unique MAC address to ensure proper transmission and reception of data. The MAC sub-layer communicates directly with the network adapter card.

Carrier Sense Multiple Access / Collision Detection (CSMA/CD) is a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously (called a collision). Standard Ethernet networks use CSMA/CD. This standard enables devices to detect a collision.

After detecting a collision, a device waits a random delay time and then attempts to re-transmit the message. If the device detects a collision again, it waits twice as long before trying to re-transmit the message. This is known as exponential backoff.
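Here is a minimal sketch of that truncated binary exponential backoff logic; transmit is a hypothetical callable standing in for the hardware, and the slot time shown is the classic 51.2 microseconds of 10 Mbps Ethernet:

```python
import random
import time

def send_with_backoff(transmit, max_attempts=16, slot_time=51.2e-6):
    """transmit() returns True on success, False when a collision is detected."""
    for attempt in range(1, max_attempts + 1):
        if transmit():
            return True
        # After the nth collision, wait a random number of slot times
        # between 0 and 2^n - 1 (the exponent is capped at 10).
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        time.sleep(slots * slot_time)
    return False  # give up after too many collisions

# Demo with a flaky fake channel that collides about half the time.
print(send_with_backoff(lambda: random.random() > 0.5))
```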

IEEE 802.5 uses token passing to control access to the medium. IBM Token Ring is essentially a subset of IEEE 802.5.

The IEEE 802.11 specifications are wireless standards that specify an "over-the-air" interface between a wireless client and a base station or access point, as well as among wireless clients. The 802.11 standards can be compared to the IEEE 802.3 standard for Ethernet for wired LANs. The IEEE 802.11 specifications address both the Physical (PHY) and Media Access Control (MAC) layers and are tailored to resolve compatibility issues between manufacturers of wireless LAN equipment.

The IEEE 802.15 Working Group provides, in the IEEE 802 family, standards for low-complexity and low-power consumption wireless connectivity.

IEEE 802.16 specifications support the development of fixed broadband wireless access systems to enable rapid worldwide deployment of innovative, cost-effective and interoperable multi-vendor broadband wireless access products.

A network interface controller (NIC), also known as a network interface card or network adapter, implements communications using a specific physical layer and data link layer standard such as Ethernet. (The photo that accompanied this article showed a 1990s Ethernet network interface controller with a BNC connector on the left and an 8P8C connector on the right.)

 


Physical Layer Topology in computer networking

ComputerGuru -

A network topology refers to the layout of the transmission medium and devices on a network. As a networking professional for many years, I can honestly say the only time network topology has come up is in certification testing. Here are some basic definitions.

Physical Topology:

Physical topology defines the cable's actual physical configuration (star, bus, mesh, ring, cellular, hybrid).

Bus: Uses a single main bus cable, sometimes called a backbone, to transmit data. Workstations and other network devices tap directly into the backbone by using drop cables that are connected to the backbone. This topology is an old one and essentially has each of the computers on the network daisy-chained to each other. This type of network is usually peer to peer and uses Thinnet (10BASE2) cabling. It is configured by connecting a "T-connector" to the network adapter and then connecting cables to the T-connectors on the computers to the right and left. At both ends of the chain, the network must be terminated with a 50 ohm terminator.

Advantages: Cheap; simple to set up.
Disadvantages: Excess network traffic; a failure may affect many users; problems are difficult to troubleshoot.

Star: Branches out via drop cables from a central hub (also called a multiport repeater or concentrator) to each workstation. A signal is transmitted from a workstation up the drop cable to the hub. The hub then transmits the signal to the other networked workstations. The star is probably the most commonly used topology today. It uses twisted pair cabling such as 10BASE-T or 100BASE-T and requires that all devices connect to a hub.

Advantages: Centralized monitoring; failures do not affect other devices unless the hub itself fails; easy to modify.
Disadvantages: If the hub fails, then everything connected to it is down.

Ring: Connects workstations in a continuous loop. Workstations relay signals around the loop in round-robin fashion. The ring topology looks the same as the star, except that it uses special hubs and network adapters. The ring topology is used with Token Ring networks (a proprietary IBM system).

Advantages: Equal access.
Disadvantages: Difficult to troubleshoot; network changes affect many users; a failure affects many users.

Mesh: Provides each device with a point-to-point connection to every other device in the network. Hybrid topologies combine the layouts above and are common on very large networks; for example, a star-bus network has hubs connected in a row (like a bus network) and computers connected to each hub. (See the sketch after this list.)

Cellular: Refers to a geographic area, divided into cells, combining a wireless structure with point-to-point and multipoint design for device attachment.
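One way to see the difference between these layouts is to write each one down as an adjacency list; a minimal sketch for four workstations:

```python
# Star: every workstation connects only to the central hub.
star = {"hub": ["pc1", "pc2", "pc3", "pc4"],
        "pc1": ["hub"], "pc2": ["hub"], "pc3": ["hub"], "pc4": ["hub"]}

# Ring: each workstation connects to its two neighbors in a loop.
ring = {"pc1": ["pc4", "pc2"], "pc2": ["pc1", "pc3"],
        "pc3": ["pc2", "pc4"], "pc4": ["pc3", "pc1"]}

# Mesh: every workstation has a point-to-point link to every other one.
nodes = ["pc1", "pc2", "pc3", "pc4"]
mesh = {n: [m for m in nodes if m != n] for n in nodes}

print(len(mesh["pc1"]))  # 3 links per device in a four-node mesh
```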

Logical Topology:

Logical topology defines the network path that a signal follows (ring or bus), regardless of its physical design.

Ring: Generates and sends the signal on a one-way path, usually counterclockwise.

Bus: Generates and sends the signal to all network devices.


LAN Media-Access Methods

Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some type of method must be used to allow one device access to the network media at a time. This is done in two main ways: carrier sense multiple access collision detect (CSMA/CD) and token passing.

In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device.

For CSMA/CD networks, switches segment the network into multiple collision domains.
 


The Internet Family of Protocols The TCP/IP protocol suite

ComputerGuru -

The Internet protocol suite commonly known as TCP/IP is a set of communications protocols used for the Internet and similar networks. TCP/IP is not a single protocol, but rather an entire family of protocols.

The network concept of protocols establishes a set of rules so that each system can speak the other's language in order for them to communicate. Protocols describe both the format that a message must take and the way in which messages are exchanged between computers.

Transmission Control Protocol (TCP) and Internet Protocol (IP) were the first two members of the family to be defined; consider them the parents of the family. A protocol stack describes a layered set of protocols working together to provide a set of network functions. Each protocol/layer serves the layer above by using the layer below.


Internet Protocol (IP)

Internet Protocol (IP) envelopes and addresses the data, enables the network to read the envelope and forward the data to its destination, and defines how much data can fit in a single packet. IP is responsible for routing packets between computers.

Internet Protocol (IP) is a connectionless, unreliable datagram protocol, which means that a session is not created before sending data. An IP packet might be lost, delivered out of sequence, duplicated, or delayed. IP does not attempt to recover from these types of errors. The acknowledgment of packets delivered and the recovery of lost packets is the responsibility of a higher-layer protocol, such as TCP.

An IP packet, also known as an IP datagram, consists of an IP header and an IP payload. The IP header contains the fields used for addressing and routing, including the source IP address of the original sender of the datagram and the destination IP address of its final destination.

Time-to-Live (TTL) designates the number of network segments on which the datagram is allowed to travel before being discarded by a router. The TTL is set by the sending host and is used to prevent packets from endlessly circulating on an IP internetwork. When forwarding an IP packet, routers are required to decrease the TTL by at least 1.
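The TTL is something ordinary programs can set through the standard socket API; this is the same trick traceroute uses, sending probes with TTL 1, 2, 3, and so on to discover each router along the path. A minimal sketch (the destination host and port are just conventional traceroute-style examples):

```python
import socket

# Create a UDP socket and lower the TTL on its outgoing packets.
# Each router decrements the TTL; when it hits zero the packet is
# discarded and an ICMP "time exceeded" message is sent back.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 3)
sock.sendto(b"probe", ("example.com", 33434))
sock.close()
```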

Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) breaks data up into packets that the network can handle efficiently, verifies that all the packets arrive at their destination, and reassembles the data. TCP is based on point-to-point communication between two network hosts. TCP receives data from programs and processes this data as a stream of bytes. Bytes are grouped into segments that TCP then numbers and sequences for delivery.

Transmission Control Protocol (TCP) is connection oriented, which means an acknowledgment (ACK) verifies that the host has received each segment of the message, providing a reliable delivery service. Acknowledgments are sent by the receiving computer, and unacknowledged packets are resent. Sequence numbers are used with acknowledgments to track successful packet transfer.

Before two TCP hosts can exchange data, they must first establish a session with each other. A TCP session is initialized through a process known as a three-way handshake. This process synchronizes sequence numbers and provides control information that is needed to establish a virtual connection between both hosts.

Once the initial three-way handshake completes, segments are sent and acknowledged in a sequential manner between both the sending and receiving hosts. A similar handshake process is used by TCP before closing a connection to verify that both hosts are finished sending and receiving all data.
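
You normally never code the handshake yourself; the operating system performs it when a program opens a connection. A minimal sketch using Python's standard socket module, looping back over localhost, shows where it happens:

```python
import socket

# The OS performs the SYN / SYN-ACK / ACK handshake inside connect().
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())   # three-way handshake happens here
conn, addr = server.accept()

# Each TCP connection is identified by its two endpoints (address + port).
print("client side:", conn.getpeername(), "server side:", conn.getsockname())

client.sendall(b"hello")               # data flows only once the session is up
print(conn.recv(5))                    # b'hello'

for s in (client, conn, server):       # close() triggers the FIN/ACK teardown
    s.close()
```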

TCP ports identify the specific program that should receive data sent using Transmission Control Protocol (TCP). TCP ports are more complex and operate differently from UDP ports.

While a UDP port operates as a single message queue and the network endpoint for UDP-based communication, the final endpoint for all TCP communication is a unique connection. Each TCP connection is uniquely identified by its two endpoints, the combination of IP address and port on each side.

Comparison between the OSI and TCP/IP Models

TCP/IP Model Layer 4. Application Layer

The Application layer is the topmost layer of the four-layer TCP/IP model. It sits on top of the Transport layer and defines TCP/IP application protocols and how host programs interface with Transport layer services to use the network.

The Application layer includes all the higher-level protocols; a short example using two of them follows the list:

  • DNS (Domain Name System)
  • HTTP (Hypertext Transfer Protocol) is the protocol used to transport web pages.
  • FTP (File Transfer Protocol) used to upload and download files.
  • TFTP (Trivial File Transfer Protocol) a simplified version of FTP, also used to upload and download files.
  • SNMP (Simple Network Management Protocol) designed to enable the analysis and troubleshooting of network hardware. For example, SNMP enables you to monitor workstations, servers, minicomputers, and mainframes, as well as connectivity devices such as bridges, routers, gateways, and wiring concentrators.
  • SMTP (Simple Mail Transfer Protocol) used for transferring email across the internet.
  • DHCP (Dynamic Host Configuration Protocol) used to centrally administer the assignment of IP addresses, as well as other configuration information such as subnet masks and the address of the default gateway. When you use DHCP on a TCP/IP network, IP addresses are assigned to clients dynamically instead of manually.
  • X Windows, Telnet, SSH, RDP (Remote Desktop Protocol)
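
A quick way to see two of the protocols listed above in action is Python's standard library. This assumes the machine has internet access; example.com is just a well-known test host:

```python
import socket
import urllib.request

# DNS: resolve a host name to one or more IP addresses.
for *_, sockaddr in socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP):
    print("resolved address:", sockaddr[0])

# HTTP: fetch a page; the library speaks HTTP over a TCP connection.
with urllib.request.urlopen("http://example.com/") as resp:
    print(resp.status, resp.headers.get("Content-Type"))
```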
     

TCP/IP Model Layer 3. Transport Layer

The Transport layer is the third layer of the four-layer TCP/IP model. It sits between the Application layer and the Internet layer. The purpose of the Transport layer is to permit devices on the source and destination hosts to carry on a conversation. The Transport layer defines the level of service and the status of the connection used when transporting data.

The main protocols included at Transport layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
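
The practical difference between the two is easy to demonstrate with Python's standard socket module. UDP needs no handshake and gives no delivery guarantee; a loopback sketch:

```python
import socket

# UDP is connectionless: no handshake, no acknowledgments, no ordering.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram", receiver.getsockname())  # fire and forget

data, src = receiver.recvfrom(1024)
print(data, "from", src)

sender.close()
receiver.close()
```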

TCP/IP Model Layer 2. Internet Layer

The Internet layer is the second layer of the four-layer TCP/IP model. It sits between the Network Access layer and the Transport layer. The Internet layer packs data into packets known as IP datagrams, which contain source and destination address (logical address or IP address) information that is used to forward the datagrams between hosts and across networks. The Internet layer is also responsible for the routing of IP datagrams.

A packet switching network depends upon a connectionless internetwork layer, known as the Internet layer. Its job is to allow hosts to insert packets into any network and have them delivered independently to the destination. At the destination, data packets may appear in a different order than they were sent. It is the job of the higher layers to rearrange them in order to deliver them to the proper network applications operating at the Application layer.

The main protocols included at Internet layer are IP (Internet Protocol), ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), RARP (Reverse Address Resolution Protocol) and IGMP (Internet Group Management Protocol).

Reverse Address Resolution Protocol (RARP) was adapted from ARP and provides the reverse function: it determines a software (IP) address from a hardware (or MAC) address. A diskless workstation uses this protocol during bootup to determine its IP address.

Address Resolution Protocol (ARP) translates a host's software address to a hardware (or MAC) address (the node address that is set on the network interface card).

Internet Control Message Protocol (ICMP) enables systems on a TCP/IP network to share status and error information such as with the use of PING and TRACERT utilities.
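
Crafting ICMP packets directly requires raw sockets and usually administrator rights, so the common approach from a script is to call the system ping utility. A small sketch, assuming ping is on the path and using a public address purely as an example:

```python
import platform
import subprocess

# The count flag differs by OS: Windows uses -n, most Unix-like systems -c.
count_flag = "-n" if platform.system() == "Windows" else "-c"
result = subprocess.run(["ping", count_flag, "1", "8.8.8.8"],
                        capture_output=True, text=True)
print(result.stdout)
```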

TCP/IP Model Layer 1. Network Access Layer

Network Access Layer is the first layer of the four layer TCP/IP model. Network Access Layer defines details of how data is physically sent through the network, including how bits are electrically or optically signaled by hardware devices that interface directly with a network medium, such as coaxial cable, optical fiber, or twisted pair copper wire.

The protocols included in Network Access Layer are Ethernet, Token Ring, FDDI, X.25, Frame Relay etc.

The most popular LAN architecture among those listed above is Ethernet. When it operates on shared media, Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection) to access the media. An access method determines how a host will place data on the medium.

In the CSMA/CD access method, every host has equal access to the medium and can place data on the wire when the wire is free from network traffic. When a host wants to place data on the wire, it checks the wire to find out whether another host is already using the medium. If there is traffic already on the medium, the host waits; if there is no traffic, it places the data on the medium. But if two systems place data on the medium at the same instant, the signals collide, destroying the data, which then needs to be retransmitted. After a collision, each host waits a small, random interval of time before retransmitting, which makes a repeat collision less likely.
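
Classic Ethernet chooses that waiting interval with truncated binary exponential backoff: after the Nth collision, a host waits a random number of slot times between 0 and 2^min(N, 10) - 1. A minimal simulation of two hosts retrying:

```python
import random

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential backoff: wait a random number
    of slot times in the range 0 .. 2**min(collisions, 10) - 1."""
    return random.randrange(2 ** min(collisions, 10))

# Two colliding hosts usually pick different waits, which breaks the tie.
for attempt in range(1, 5):
    print(f"collision {attempt}: host A waits {backoff_slots(attempt)} slots, "
          f"host B waits {backoff_slots(attempt)} slots")
```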
 


The Physical Layer of the OSI model

ComputerGuru -

The Physical Layer consists of the basic hardware transmission technologies of a network, sometimes referred to as the physical media. Physical media provides the electro-mechanical interface through which data moves among devices on the network.

Initially, physical media was thought of as some sort of wire. As technology progresses, the types of media grow.

Bounded media transmits signals by sending electricity or light over a cable. Unbounded media transmits data without the benefit of a conduit: it might transmit data through open air, water, or even a vacuum. Simply put, media is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Data communications definitions:

Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), connections run over the standard copper phone lines found in most homes.

Integrated Services Digital Network (ISDN) uses a single wire or fiber optic line to carry voice, data, and video signals.

In the early days of connecting your computer to the internet most folks had Public Switched Telephone Network (PSTN), also referred to as Plain Old Telephone Service (POTS), and all connections were run over the standard copper phone lines. In order for the digital world of computers to talk over analog phone lines you needed to use a MODEM.

The term MODEM comes from the words modulator and demodulator: it is a device that modulates a carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data.

Modem standards, or V dot modem standards, are defined by the ITU (International Telecommunication Union). The FCC has limited the speed of analog transmissions to 53 Kbps.

Basic Rate Interface (BRI) is most commonly used in residential ISDN connections. It's composed of two bearer (B) channels at 64 Kbps each for a total of 128 Kbps (used for voice and data) and one delta (D) channel at 16 Kbps (used for controlling the B channels and signal transmission). The total bandwidth is up to 144 Kbps.

Primary Rate Interface (PRI) is most commonly used between a PBX (Private Branch Exchange) at the customer's site and the central office of the phone company. It is composed of 23 B channels at 64 Kbps and one D channel at 64 Kbps. The total bandwidth is up to 1,536 Kbps.

Digital Subscriber Line (DSL) technologies use existing, regular copper phone lines to transmit data. DSL hardware can transmit data using three channels over the same wire. In a typical setup, a user connected through a DSL hookup can send data at 640 Kbps, receive data at 1.5 Mbps, and still carry on a standard phone conversation over one line.

T-Carrier Technology is a digital transmission service used to create point-to-point private networks and to establish direct connections to Internet Service Providers. It uses four wires, one pair to transmit and another to receive.

T-1 lines support data transfer at rates of 1.544 megabits per second. Each T-1 line contains 24 channels. The E1 line is the European counterpart that transmits data at 2.048 Mbps.

T-3 has 672 (64 Kbps) channels, for a total data rate of 44.736 Mbps. The E3 line is the European counterpart that transmits data at 34.368 Mbps.
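
The channel arithmetic behind those rates is simple enough to check. Note that the T-1 line rate adds 8 Kbps of framing overhead to the 24-channel payload, and the 44.736 Mbps T-3 figure likewise includes framing:

```python
# All figures in Kbps.
bri = 2 * 64 + 16         # ISDN BRI: two B channels plus one D channel = 144
pri = 23 * 64 + 64        # ISDN PRI: 23 B channels plus one D channel = 1536
t1_payload = 24 * 64      # T-1 payload: 24 channels of 64 Kbps = 1536
t1_line = t1_payload + 8  # plus 8 Kbps of framing = 1544 (1.544 Mbps)
t3_payload = 672 * 64     # T-3 payload: 672 channels = 43008
print(bri, pri, t1_payload, t1_line, t3_payload)
```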

Cable connections provide access to the Internet through the same coaxial cable that brings cable TV into your home. A signal splitter installed by the cable company isolates the Internet signals from the TV signals. The two-way cable connection is always available and can be very fast. Speeds up to 30 Mbps are claimed to be possible, although speeds in the 1 to 2 Mbps range are more typical.

The Physical Layer Ethernet Specifications

Ethernet is a family of computer networking technologies for local area networks (LANs) and larger networks, originally developed at Xerox PARC in the 1970s. Robert Metcalfe, one of the inventors of Ethernet, left Xerox PARC in 1979 to create 3Com Corporation to focus on deploying Ethernet technology.

In 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks (LAN). The IEEE 802 standards map to the lower two layers (Data Link and Physical) of the seven-layer OSI networking reference model. IEEE 802.3 is a working group and a collection of IEEE standards focusing on wired Ethernet.

Twisted-pair Ethernet cable has the following specifications: a maximum of 1,024 attached workstations, a maximum of 4 repeaters between communicating workstations, a maximum segment length of 328 feet (100 meters).

100BASE-TX specification uses two pairs of Category 5 UTP or Type 1 STP cabling at a 100 Mbps data transmission speed. Each segment can be up to 100 meters long.

100BASE-T4 specification uses four pairs of Category 3, 4, or 5 UTP cabling at a 100 Mbps data transmission speed with standard RJ-45 connectors. Each segment can be up to 100 meters long.

Fiber optic cable (IEEE 802.8) has a center core surrounded by a glass cladding composed of varying layers of reflective glass that refracts light back into the core. Maximum length is 25 kilometers and speed is up to 2 Gbps, but it is very expensive. Best used for a backbone due to cost.

100BASE-FX specification uses two-strand 62.5/125 micron multi- or single-mode fiber media. Half-duplex, multi-mode fiber media has a maximum segment length of 412 meters. Full-duplex, single-mode fiber media has a maximum segment length of 10,000 meters.

Other wired LAN technologies

Ethernet has largely replaced competing wired LAN technologies such as token ring, token bus, and ARCNET.

IEEE standard 802.4 defined Token Bus. It was mainly used for industrial applications; Token Bus was used by General Motors for their Manufacturing Automation Protocol (MAP) standardization effort. The IEEE 802.4 Working Group has been disbanded and the standard has been withdrawn.

Token Ring was IBM's protocol of choice, standardized as IEEE 802.5. Introduced by IBM in 1984, Token Ring was fairly successful in corporate environments, but gradually lost out to Ethernet.

ARCNET was a very early LAN system, a token-passing bus with a 2.5 Mbit/sec speed, popular in the 1980s.

Wireless standards

The standards defining the physical layer of wired Ethernet are known as IEEE 802.3, which is part of a larger set of project 802 standards by the Institute of Electrical and Electronics Engineers Standards Association.

IEEE 802.11 (WLAN): 802.11 and 802.11x refer to a family of specifications developed by the IEEE for wireless LAN (WLAN) technology. 802.11 specifies an over-the-air interface between a wireless client and a base station or between two wireless clients.

IEEE 802.15 defines Bluetooth, a wireless technology standard for exchanging data over short distances (using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz) from fixed and mobile devices, and for building personal area networks (PANs).

IEEE 802.16 defines WiMAX standards for broadband wireless metropolitan area networks. Officially called WirelessMAN in the IEEE, it has been commercialized under the name "WiMAX".

While in your world many of the older data communications technologies may have been replaced with modern ones, there are many reasons why you may need to know about them. You will get a better understanding of how things are done on your current network if you understand the evolution of the network.

If you ever work in consulting you may be surprised to find out how much of what you call obsolete is still in use. You will also find questions on older technologies on various certification tests.


What is the difference between the Internet and OSI reference models?

ComputerGuru -

When learning computer networking it is essential to have a general idea of the different computer networking reference models and the reasoning behind the layered approach. Both the TCP/IP network model and the OSI model create a reference model for computer networking. The OSI model is widely used to teach students, as it was created in the mindset of a reference book. The TCP/IP standards were created to provide guidance to people actually implementing a networking technology, in the mindset of a service manual. Much like the answer to the question of why the internet was created, the answer to why we need the OSI model depends on who you ask. Here at ComputerGuru.net we try to explain the basics of the OSI model as it relates to understanding basic computer networking.

The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparisons of the different models. Don't get too hung up on drawing direct comparisons between the two models. Our discussion here on the two networking reference models addresses some commonly asked questions and gives some historical perspective as to how the models have evolved.


The Open Systems Interconnection Reference Model (OSI Reference Model or OSI Model) was originally created as the basis for designing a universal set of protocols called the OSI Protocol Suite. This suite never achieved widespread success, but the model became a very useful tool for both education and development. The model defines a set of layers and a number of concepts for their use that make understanding networks easier. The theoretical OSI Reference Model is the creation of the European based International Organization for Standardization (ISO), an independent, non-governmental membership organization that creates standards in numerous areas of technology and industry. The OSI model was first published in 1984 as ISO 7498: Information processing systems -- Open Systems Interconnection -- Basic Reference Model.

The Internet model is often compared to the OSI model. This internet model has many names such as the DOD reference model or the ARPANET reference model, because like the internet itself the TCP/IP protocol suite has evolved over the years. The ARPANET was the original name of the network we now call the internet. ARPA, currently known as DARPA, the Defense Advanced Research Projects Agency, is funded by the DoD (Department of Defense).

Unlike the International Standards Organization (ISO) where there is one main library of information that maintains specific standards, the internet is an ever evolving network with many entities working together to maintain standards. There is a collection of documents known as Request for Comments (RFC) maintained by the Internet Engineering Task Force (IETF) that describes various technology specifications.

Simple talk and some needed geek speak

Since TCP/IP is the primary networking language of the internet, everyone who works in the field of technology needs to have at least a simple understanding of how it works and its role in the big picture of the internet. In the spirit of the Guru 42 family of websites, we attempt to tackle the basic understanding using as simple terms as possible.

To understand the role of TCP/IP in the big picture of the internet, we need to delve just a bit into the geek speak of the internet. If you want to learn more, and really delve into how the internet works and the interesting history of the internet, you will need an understanding of the IETF and RFCs.

What is an RFC?

The concept of Request for Comments (RFC) documents was started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

In computer network engineering, a Request for Comments (RFC) is a formal document published by the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), and the global community of computer network researchers, to establish Internet standards. The Internet Engineering Task Force (IETF) develops and promotes voluntary Internet standards.

The IETF started out as an activity supported by the U.S. federal government, but since 1993 it has operated as a standards development function under the auspices of the Internet Society, an international membership-based non-profit organization.

Which came first, the Internet model or the ISO model?

A question often asked is which network reference model came first. Various sources state that the groundwork for the Open Systems Interconnection model (OSI Model) was started in the 1970s by a group at Honeywell Information Systems. Other sources point to two projects that began independently in the 1970s to define a unifying standard for the architecture of networking systems. One was administered by the International Organization for Standardization (ISO), and one by the International Telegraph and Telephone Consultative Committee (CCITT).

RFC 871, published in September 1982, is one of the first formal descriptions of the ARPANET Reference Model (ARM). The introduction of RFC 871 addresses the history of the internet model versus the ISO model.

"Since well before ISO even took an interest in "networking", workers in the ARPA-sponsored research community have been going about their business of doing research and development in intercomputer networking with a particular frame of reference in mind."

Is there an official document that explains the ARPANET Reference Model (ARM)?

RFC 871 was published in September 1982 as a recollection of the past by one of the developers of the ARPANET Reference Model, described by its author as "a perspective on the ARM." The author points out that the ARPANET Network Working Group (NWG), which was the collective source of the ARM, hadn't had an official general meeting since October 1971.

The four-layer internet model was defined in Request for Comments 1122 and 1123. RFC 1122, published October 1989, covers the link layer, IP layer, and transport layer; companion RFC 1123 covers the application layer and support protocols.

The TCP/IP Model is not merely a reduced version of the OSI Reference Model with a straight-line comparison of the four layers of the TCP/IP model to the seven layers of the OSI model. As you read through many of the RFC documents on IETF protocol development you will see direct statements that the authors are not concerned with strict layering, such as section 3 of RFC 3439, which is titled "Layering Considered Harmful."

The links below to RFC 1958 and 3439 will help you understand the general mindset of the developers of TCP/IP. RFC 1122 and RFC 1123 are the definitions of the four protocol layers of the TCP/IP model. As the constantly growing library of RFCs illustrates, the TCP/IP model is an ongoing evolution.

References:

Request for Comments (RFC) http://www.ietf.org/rfc.html

Memos in the Requests for Comments (RFC) document series contain technical and organizational notes about the Internet. The Internet Engineering Task Force (IETF) is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet.

RFC 871: September 1982 https://tools.ietf.org/html/rfc871

A perspective on the ARPANET REFERENCE MODEL
Abstract: The paper, by one of its developers, describes the conceptual framework in which the ARPANET intercomputer networking protocol suite, including the DoD standard Transmission Control Protocol (TCP) and Internet Protocol (IP), were designed.

RFC 1122: October 1989 https://tools.ietf.org/html/rfc1122
This RFC covers the communications protocol layers: link layer, IP layer, and transport layer;

RFC 1123: October 1989 https://tools.ietf.org/html/rfc1123
This RFC covers the applications layer and support protocols.

RFC 1958: June 1996 https://tools.ietf.org/html/rfc1958
Architectural Principles of the Internet

RFC 3439: December 2002 https://tools.ietf.org/html/rfc3439
Internet Architectural Guidelines
Extends RFC 1958 by outlining some of the philosophical guidelines to which architects and designers of Internet backbone networks should adhere.


Links to learn more:

Check out our site Geek History where we discuss the evolution of the ARPANET and TCP/IP

Why was the internet created: 1957 Sputnik launches ARPA
http://geekhistory.com/content/why-was-internet-created-1957-sputnik-launches-arpa

When was internet invented: J.C.R. Licklider guides 1960s ARPA Vision
http://geekhistory.com/content/when-was-internet-invented-jcr-licklider-guides-1960s-arpa-vision

In the 1960s Paul Baran developed packet switching
http://geekhistory.com/content/1960s-paul-baran-developed-packet-switching

The 1980s internet protocols become universal language of computers
http://geekhistory.com/content/1980s-internet-protocols-become-universal-language-computers

Photo: Interface Message Processor (IMP) ARPANET packet routing


The OSI model explained in simple terms

ComputerGuru -

Learning technology isn't sexy, but I am doing my best to keep it interesting. Here I take on the complex subject of the computer networking OSI model explained in simple terms. In our previous article, Understanding the mystical OSI Model explained in simple terms, we used an analogy to illustrate the OSI model.

Why is the OSI Reference Model important?

Simply put the OSI Reference Model is a THEORETICAL model describing a standard of computer networking. The TCP/IP Reference model is based on the ACTUAL standards of the internet which are defined in the collection of Request for Comments (RFC) documents started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications.

The OSI model is important because many certification tests use it to determine your understanding of computer networking concepts. The OSI Reference Model is an attempt to create a set of computer networking standards by the International Standards Organization. A "Reference Model" is a set of text book definitions. You often learn something new by first learning text book definitions. The common protocol suite of computer networking is TCP/IP. The geeks who created TCP/IP were not as anal in creating a pretty "reference model." TCP/IP evolved over many years as it went from a theory to the concept of the internet.


The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparison of the different models. Don't spend too much time trying to compare one versus the other. The two models were developed independently of each other to describe the standards of computer networking.

The TCP/IP Reference Model is not merely a reduced version of the OSI Reference Model with a straight-line comparison of the four layers of the TCP/IP model to the seven layers of the OSI model. The TCP/IP Reference Model does NOT always line up neatly against the OSI model. People try too hard to make neat comparisons of one model versus the other when there is not always a neat one-to-one correlation of each aspect.


The stated purpose of the OSI Model:

  • breaks network communication into smaller, simpler parts that are easier to develop.
  • facilitates standardization of network components to allow multiple-vendor development and support.
  • allows different types of network hardware and software to communicate with each other.
  • prevents changes in one layer from affecting the other layers so that they can develop more quickly.
  • breaks network communication into smaller parts to make it easier to learn and understand.


The seven Layers of the OSI Model

The hierarchical layering of protocols on a computer that forms the OSI model is known as a stack. A given layer in a stack sends commands to layers below it and services commands from layers above it.

The seven layers in order from highest to lowest are Application, Presentation, Session, Transport, Network, Data Link, and Physical. They can be remembered by using the following memory aid: All People Seem To Need Data Processing.

The Application layer includes network software that directly serves the user, providing such things as the user interface and application features. The Application layer is usually made available by using an Application Programmer Interface (API), or hooks, which are made available by the networking vendor.

The Presentation layer translates data to ensure that it is presented properly for the end user. It also handles related issues such as data encryption and compression, and how data is structured, as in a database.

The Session layer comes into play primarily at the beginning and end of a transmission. At the beginning of the transmission, it makes known its intent to transmit. At the end of the transmission, the Session layer determines if the transmission was successful. This layer also manages errors that occur in the upper layers, such as a shortage of memory or disk space necessary to complete an operation, or printer errors.

The Transport layer provides the upper layers with a communication channel to the network. The Transport layer collects and reassembles any packets, organizing the segments for delivery and ensuring the reliability of data delivery by detecting and attempting to correct problems that occurred.

The Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

The Data Link layer provides a system through which network devices can share the communication channel. This function is called media-access control (MAC).

The Physical layer provides the electro-mechanical interface through which data moves among devices on the network.
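
For quick reference, here are the same seven layers as a simple Python mapping, with the mnemonic lined up against them:

```python
# The seven OSI layers, numbered from the bottom (1) to the top (7).
OSI_LAYERS = {
    7: "Application", 6: "Presentation", 5: "Session", 4: "Transport",
    3: "Network", 2: "Data Link", 1: "Physical",
}

# The mnemonic reads the layers from the top (7) down to the bottom (1).
mnemonic = "All People Seem To Need Data Processing".split()
for word, layer in zip(mnemonic, range(7, 0, -1)):
    print(f"{word:<10} -> layer {layer}: {OSI_LAYERS[layer]}")
```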

In the articles that follow we will break down each layer in more detail, covering topics you will need to know as a networking professional.
 


Understanding the mystical OSI Model explained in simple terms

ComputerGuru -

As you begin your quest to learn computer networking one of the first tasks you have before you is a basic understanding of the OSI model.

For many folks understanding the OSI model is like trying to understand some mystical formula that controls the way computer networks operate.

As we help you begin your journey to understanding computer networking, we will tackle explaining the complex subject of the computer networking OSI model in simple terms, in hopes that you will gain an understanding of the reasons behind the definitions.

You can find a lot of resources that define the components of the OSI model, but an understanding of the reasons behind the definitions will go a long way toward fully understanding this complex technology model.

The acronym and the organization behind it can get confusing. The formal name for the OSI model is the Open Systems Interconnection model. Open Systems refers to a cooperative effort to have development of hardware and software among many vendors that could be used together. The model is a product of the International Organization for Standardization (2) which is often abbreviated ISO.


The logic behind the OSI model

Before we delve into the OSI model, let us take a moment to understand the organization behind it. You may have seen the term ISO certified in various technology areas. ISO, the International Organization for Standardization, (1) is the world's largest developer and publisher of International Standards. ISO helps manage and create international standards in many technical areas to ensure the same quality of a product or process regardless of location or company.

The OSI (Open Systems Interconnection) model provides a set of general design guidelines for data communications systems and gives a standard way to describe how various layers of data communication systems interact. Applying the logic of the ISO standards to computer networking, a computer component or piece of computer software needs to comply with a set of standards so that the product or process will work no matter where in the world we are, and no matter who in the world is producing it.

Putting the OSI model into perspective

Strive for a good understanding of the intent of the model and a few of the core principles; that will go a long way toward an overall understanding of computer networking. Do not focus on the intricate details of the OSI model at first, as the more you read the more confused you may get. The model was created in the 1970s and the technology is ever changing. Many textbooks will contradict each other on some aspects of the upper layers. Some of the reasoning behind the upper layers is for processes that are not nearly as useful today as they were many years ago, and for that reason many other network models blend the upper three layers into a single layer.

Basic definitions of the OSI Model

The seven layers of the OSI Model can be remembered by using the following memory aide: All People Seem To Need Data Processing. As you say the phrase, write down the first letter of each word, and that will help you to remember the seven layers in order from highest to lowest: Application, Presentation, Session, Transport, Network, Data Link, and Physical. We will briefly discuss the lower four layers from the bottom up.

Layer one, the Physical layer provides the path through which data moves among devices on the network.

Layer two, the Data Link layer provides a system through which network devices can share the communication channel.

Layer three, the Network layer's main purpose is to decide which physical path the information should follow from its source to its destination.

Layer four, the Transport layer provides the upper layers with a communication channel to the network.

An analogy to understand the model

Some of the reasons behind the OSI model are to break network communication into smaller, simpler parts that are easier to develop, and to facilitate standardization of network components to allow multiple-vendor development and support.

Let's take the reasons behind the OSI model and apply them to something totally different to illustrate how they are used. If we wanted to start a railroad and build a new type of train from scratch, and we wanted this train to be able to use existing train tracks, and existing train stations so our new system could get up and running quickly, we would need to understand what existing standards are currently in place.

Even if we never had to build a set of train tracks, we would need to understand the standards by which train tracks were built and designed so we could assure our train could operate on them, and how the track is shared. Likewise, in order for components to operate, manufacturers must understand the track, layer one, and how the track is shared, layer two.

If we are building trains, not train stations, we need to know the size and shape of other vehicles using the tracks so our trains could use the same track as all the other trains. Layer one of the OSI model gives us the path, or the track we use for communication. Layer one, referred to as the media, is the wire, or anything that takes the place of the wire, such as fiber optic, infrared, or radio spectrum technology.

Once you have more than one train on the track, you need to find a way to share the track. Layer two provides a system through which network devices can share the communication channel, or in the case of our analogy, share the track. One of the functions of layer two is called media access control (MAC). If you think about the term media access control you can break it down into the two parts it represents, the media or the track, and access control, or the sharing of the track.

In the OSI model layers one and two represent the media, or the physical components. Layers three through seven represent the logical, or software, components.

In layer three of the OSI model, the Network layer, the logical decision is made as to which physical path the information should follow from its source to its destination.

In order to continue our analogy to understand this complex set of rules, think of the track system that has already been built as layers one and two. Once this track system is in place we need a system to control the routing of the train system that runs on the tracks. Think of layers three through seven as processes which affect the train itself, which would represent the actual package of information being transported along the tracks. The main purpose of layer three is switching and routing.

Layer four of the OSI model, the transport layer ensures the reliability of data delivery by detecting and attempting to correct problems that occurred. In terms of our analogy, think of this as a set of standards and procedures that allows our train to arrive safely at its destination in a timely manner.

Learning and understanding the OSI model can be confusing. The goal of this article was not to define the layers of the OSI model in a purely technical manner, but to offer an analogy to understand why it is needed and how it is used to establish standards for data communications. In our next article we will go over the basic definitions of all the layers of the OSI model.


Sources:
(1) http://www.iso.org/iso/about.htm
(2) http://www.iso.org/iso/home.html


Basic computer networking explained in simple terms

ComputerGuru -

Whether you are a business manager learning the language of technology to better communicate with IT staff, or just beginning your IT career, don't overlook a basic understanding of computer networking.

What is computer networking?

The simplest definition of a computer network is a group of computers that are able to communicate with one another and share a resource. A computer network is a collection of hardware and software that enables a group of devices to communicate and provides users with access to shared resources such as data, files, programs, and operations.

In simplest terms, a computer network is created to share. In teaching computer networking I often commented that if you find someone who didn't like to use the computer network, they probably had a personal issue with the concept of sharing.

We live in a world of data and information. We love to share data and information. All that data and information gets from my house to your house thanks to the concepts of computer networking. We need computer networking to build the vehicles that transport data and information.


Common networking terms

Each device on a network is called a node. In order for communications to take place, you need the software, the network operating system (NOS), and the means of communication between network computers, known as the media.

In computer networking the term media refers to the actual path over which an electrical signal travels as it moves from one component to another. The media can be physical such as a specialized cable or various forms of wireless media such as infrared transmission or radio signals.

A network interface card (NIC) enables two computers to send and receive data over the network media.

What is a protocol?

A network protocol is an agreed-upon set of rules that define how network computers communicate. Different types of computers, using different operating systems, can communicate with each other and share information as long as they follow the network protocols.

The Internet protocol suite commonly known as TCP/IP is a set of communications protocols used for the Internet and similar networks. You will often see the terms protocol suite or protocol stack used interchangeably. The protocol stack is an implementation of a computer networking protocol suite.

What is a LAN (Local Area Network) versus a WAN (Wide Area Network)?

In a typical LAN (Local Area Network) a group of computers and devices are connected together by a switch, or stack of switches, using a private addressing scheme as defined by the TCP/IP protocol. You may not be familiar with the specific function of a network switch or the definition of a private addressing scheme; they are more advanced topics of computer networking.

Private addresses are unique in relation to other computers on the local network. Routers are found at the boundary of a LAN, connecting them to the larger WAN.

In a WAN (Wide Area Network) you will have multiple LANs connected together using routers. I was taught many years ago that a WAN had nothing to do with the size of a computer network, but was simply connecting multiple LANs together across the public highway system, such as the internet.

People often try to explain concepts like LAN and WAN using terms and descriptions that have nothing to do with the definition. I often see people put numbers of computers into their definitions of LAN and WAN. If you have a three-computer LAN that uses the public highway, as in the internet and internet addressing, to connect to another three-computer network, the two LANs working together form a WAN.

You may not be familiar with the specific function of a network switch versus a router, or the definition of a private addressing scheme versus a public address; they are more advanced topics of computer networking, but they are the core elements that separate a LAN from a WAN.
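
Python's standard ipaddress module already knows the RFC 1918 private ranges, which makes the addressing side of the LAN-versus-WAN distinction easy to check (the addresses below are just samples):

```python
import ipaddress

# Private addresses only need to be unique on the local network;
# public addresses must be unique across the whole internet.
for text in ["192.168.1.10", "10.0.0.5", "172.16.4.1", "93.184.216.34"]:
    ip = ipaddress.ip_address(text)
    print(text, "->", "private" if ip.is_private else "public")
```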

What is the client server network model?

In the most common network model, client server, at least one centralized server manages shared resources and security for the other network users and computers. A network connection is only made when information needs to be accessed by a user. This lack of a continuous network connection provides network efficiency.

The client requests services or information from the server computer. The server responds to the client's request by sending the results of the request back to the client computer.

Security and permissions can be managed by administrators which cuts down on security and rights issues when dealing with a large number of workstations. This model allows for convenient backup services, reduces network traffic and provides a host of other services that come with the network operating system.

What are Peer-to-Peer Networks?

Simply sharing resources between computers, such as on a typical home network, every computer acts as both a client and a server. Any computer can share resources with another, and any computer can use the resources of another, given proper access rights.

This is a good solution when there are 10 or fewer users in close proximity to each other, but it is difficult to maintain security as the network grows. This model can be a security nightmare, because permissions for shared resources must be set and maintained at each workstation, and there is no centralized management. This model is only recommended in situations where security is not an issue.

Other Network Models

Before microcomputers became cost effective, dumb terminals were used to access very large mainframe computers in remote locations. The local terminal was dumb in the sense that it was nothing more than a way for a keyboard and monitor to access another computer remotely, with all the processing occurring on the remote computer. This model, sometimes referred to as a centralized model, is not very common.


The all encompassing footnote

A LAN could use something other than a TCP/IP addressing scheme, but the illustration of a LAN and WAN based network as I describe it is a typical implementation.

These definitions were written off of the top of my head based on many years of networking experience. Any resemblance to Wiki or any other website is merely coincidental. (Since I am defining basic terms I would hope that they are at least similar!)

Our goal is geek speak made simple. I realize that I may have oversimplified some terms, but the goal here at Computerguru.net is to deliver a basic understanding of the concepts in simple terms, not to deliver a lecture on computer networking fundamentals that defines every term. I see many answers on various forums that overcomplicate matters as well as add quite a bit of stray information.


Basic network concepts and the OSI model explained in simple terms

ComputerGuru -

In this chapter of the journey to learn computer networking technology we explain the OSI Reference Model in simple terms, and expand on the different layers of the OSI model.

The OSI model defines the basic building blocks of computer networking, and is an essential part of a complete understanding of modern TCP/IP networks. The theoretical OSI Reference Model is the creation of the European based International Organization for Standardization (ISO), an independent, non-governmental membership organization that creates standards in numerous areas of technology and industry.

Why is the OSI Reference Model important?

An understanding of the concepts of the OSI Reference Model is absolutely necessary for someone learning the role of the Network Administrator or the System Administrator. The OSI model is important because many certification tests use it to determine your understanding of computer networking concepts.

The Open Systems Interconnection Reference Model (OSI Reference Model or OSI Model) was originally created as the basis for designing a universal set of protocols called the OSI Protocol Suite. This suite never achieved widespread success, but the model became a very useful tool for both education and development. The model defines a set of layers and a number of concepts for their use that make understanding networks easier.

The Internet and the TCP/IP family of protocols evolved separately from the OSI model. Often you find teachers, and websites, making direct comparison of the different models. Don't spend too much time trying to compare one versus the other. The two models were developed independently of each other to describe the standards of computer networking.

The TCP/IP Reference Model is not merely a reduced version of the OSI Reference Model with a straight line comparison of the four layers of the TCP/IP model to seven layers of the OSI model. The TCP/IP Reference Model does NOT always line up neatly against the OSI model. People try too hard to make neat comparisons of one model versus the other when there is not always a neat one to one correlation of each aspect.

Simply put the OSI Reference Model is a THEORETICAL model describing a standard of computer networking. The TCP/IP Reference model is based on the ACTUAL standards of the internet which are defined in the collection of Request for Comments (RFC) documents started by Steve Crocker in 1969 to help record unofficial notes on the development of ARPANET. RFCs have since become official documents of Internet specifications, as discussed in the article What is the difference between the Internet and OSI reference model.

To learn more about the evolution of the TCP/IP model check out the Geek History article: The 1980s internet protocols become universal language of computers

If you are looking for something less technical that focuses more on using a computer network, rather than understanding the core concepts of how it works, please visit our companion website The Guru 42 Universe, where we discuss managing technology from the perspective of a business owner or department manager.

Check out the section Business success beyond great ideas and good intentions and specifically the article The System Administrator and successful technology integration.


The role of the Network Administrator or the System Administrator

On a small to mid size network there may be little, if any, distinction between a Systems Administrator and a Network Administrator, and the tasks may all be the responsibility of a single post. As the size of the network grows, the distinction between the areas will become more well defined.

In larger organizations the administrator level technology personnel typically are not the first line of support that works with end users, but rather only work on break and fix issues that could not be resolved at the lower levels.

Network administrators are responsible for making sure computer hardware and the network infrastructure itself is maintained properly. The term network monitoring describes the use of a system that constantly monitors a computer network for slow or failing components and that notifies the network administrator in case of outages via email, pager or other alarms.

The typical Systems Administrator, or sysadmin, leans towards the software and NOS (Network Operating System) side of things. Systems Administrators install software releases, upgrades, and patches, resolve software-related problems, and perform system backups and recovery.

What is the difference between networking and telecommunications?

In a large organization the distinction of telecommunications and networking can vary depending how the organization is structured. I've worked in smaller companies where anything technology related came under the responsibility of the IT (information technology) department. In larger organizations the roles get a bit more defined and separated. For instance, in a large organization someone specializing in telecommunications may have little or no role in understanding computer servers and network operating systems.

I am answering this from my very personal perspective. I began working in telecommunications in the 1970s. In the military that meant I installed and repaired radio communications and telephone equipment. In the commercial world I had an FCC (Federal Communications Commission) license which allowed me to work on radio communications equipment.

In the 1990s I began working in computer networking, which would be IT (information technology). I see the distinction there as information is data driven. My responsibilities are computer servers and network operating systems. The basic premise of a computer network is to share a resource. The device which allows the resource to be shared is a server. For instance, a print server allows a printer to be shared, and a file server allows files to be shared.

In my current position my title includes "telecommunications and networking." My telecommunications responsibilities include telephones. Now with IP (internet protocol) based phones, you have the question of whether it is a phone system problem or a network problem. The separation of responsibilities was a lot "cleaner" before IP based phones. My telecommunications responsibilities also include things like the internal network wiring and dealing with the external issues regarding the connectivity to the building. My networking (IT) responsibilities are the maintenance of the computer servers and the network operating systems that allow resources to be shared.


What is the best desktop computer operating system?

ComputerGuru -

There is no one size fits all answer to "what is the best desktop computer operating system?" Let me first tackle the differences between Linux, Microsoft, and Apple. Hopefully the tech purists won't beat me up too much for generalizing here.

The arguments over which operating system (OS) is best often focus on the GUI (graphical user interface). Apple focused on being graphical from the start, and on creating a single power-user desktop computer. They have created their own very successful world.

I work in the world of enterprise computers, that's where many computers are talking together, working together, on local area networks (LANs) and wide area networks (WANs). Some might say I have gone over to the dark side and become a Microsoft fan boy. I bashed Microsoft quite a bit over the years for inefficient operating systems. After spending more than 20 years working with Microsoft products in the enterprise environment I have come to appreciate Microsoft and all the technology they have created.

Linux is a Unix-like computer operating system. When I was teaching I always remembered a line from a song when I described Unix: "It wasn't built for comfort, it was built for speed." Command line functions, the non-GUI stuff, are important to the people who use Unix. A lot of Linux, like Unix, is used by people running it on servers; they don't care about the GUI. That's why there are so many distributions of Linux: some are geared to people using it mainly for server based applications, and some Linux distros focus on a pretty GUI. Distro is a shortened version of the term distribution. We will discuss popular Linux distros in our next article.

The Linux kernel

Let me use the analogy of building an automobile and say that the operating system kernel is like the engine and drive train of the vehicle. Some people argue the case for Linux based on the assumption that the Linux kernel offers the best engine and drive train to power our computer. That depends, the best for what purpose?

The question often comes up as to why Windows or Apple doesn't create services and applications that work with Linux.

From a programming perspective Microsoft has spent billions of dollars creating services and applications that run on their kernel. What incentive would they have to start creating services and applications specific to a Linux kernel?

Apple seems pretty happy pumping out smartphones, some Apple fans are sad that Apple now appears more focused on phones rather than computers. Apple is the most profitable company on the planet. Why would they start creating services and applications specific to a Linux kernel?

You can't make money on Linux?

There are answers that suggest Apple or Microsoft could not make money supporting Linux. Some people don't understand the concept of open source and believe you can't make money by supporting it.

Richard Stallman, the father of the free software movement, explains that software freedom refers to the preservation of the freedoms to use, study, distribute, and modify that software, not zero cost. In illustrating the concept of gratis versus libre, Stallman is famous for using the sentence, "free as in free speech, not as in free beer."

As Google has shown with Android you can straddle the fence successfully between supporting an open source operating system while still maintaining a fair amount of proprietary components.

As far as Microsoft supporting Linux, in case you missed it: What do you think of Microsoft joining the Linux Foundation?

"The best GUI"

If we get beyond the argument of why the Linux kernel is the best, the question assumes that we need a Windows or Apple GUI (graphical user interface) to make "the best OS."

There are many impressive looking GUIs in the Linux world. Take a look at all the Linux distributions: some distros have focused on the server geeks and server functions, some have focused on looking good with pretty GUIs for the desktop crowd. For instance, Mint is a fork of Ubuntu, which is itself a fork of Debian. Mint was forked off Ubuntu with the goal of providing a familiar desktop GUI.

It's funny how questions on forums often start with "Why is Microsoft Windows so popular?" and then go on to give reasons why it shouldn't be so popular. Microsoft is popular, that is the reality. The reasons of why it shouldn't be so popular are typical perceptions of Linux users looking to stir up a debate.

Desktop computers and personal computers started entering homes and offices in the 1980s. The world of what we then called "IBM compatible" was driven by computers with command line operating systems. That meant you had to type in commands, short words, to get your computer to perform various tasks. People came up with various menus and interfaces, but the desktop was not very graphical.

The mid 1990s was the perfect storm for Microsoft Windows 95. The world was just discovering the internet as online services began connecting to the internet for the first time. Microsoft began marketing Windows 95 as the graphical user interface to the desktop computer, and to the graphical world wide web with Internet Explorer. Love them or hate them, Microsoft became the dominant desktop operating system that people used in their homes and connected to the web with in the 1990s.

It is because Windows became the predominant desktop computer operating system in the 1990s, in offices and schools, that people have little reason to use something different at home. In order to get people to change, the differences must be totally seamless.

Many Linux fans will say that Linux has become much easier to use, and the interface much more like Microsoft Windows. Many Linux users will call Windows too complicated and say that switching over to Linux is easy. That is a matter of perspective. I have been supporting desktop computers for more than 30 years; I know first hand how people hate change. Give any Windows user a different operating system and they will call it complicated, because it is different. When BlackBerrys went out of style and people were forced to use Apples and Androids, I heard users complain about how they missed how easy their BlackBerry was to use. It was easy because that was what they learned on, and now they were forced to change.

I keep hearing about how all the cool Linux distros are faster, sleeker, and better than Windows, but no major computer company has yet mass produced a desktop computer with a Linux distro. The closest thing to a home use Linux based computer is the Google Chromebook. I have a Linux computer at home, but it is just a web browser and email reader. Sure, there are a few games on it as well. But there are too many applications I use at work that I could never bring home because they won't run on a Linux computer.

I am by no means a Microsoft Fanboy. Over the years I have had strong words for how Microsoft has done things, but in recent years I finding myself defending Microsoft because some of the negativity gets pretty silly at times. I am not going to force myself and my family to use a Linux computer just to prove a point. I don't see myself going down that road anytime soon.o

My perspective is also a bit different that the average home user, I am a systems admin. I need to worry about how well multiple computers play together with multiple users. The computer could be use as a toy, or a set of tools, what works best for you depends on what applications you need to do the job. There is no one size fits all answer to which computer should you use.

Tags: 

Common technology questions and basic computer concepts

ComputerGuru -

In this section we are covering common questions and basic computer concepts from the perspective of a typical home user. The first question is obviously, "What computer should I buy?"

Anyone who answers your question "What computer should I buy?" without first asking a few questions back does not understand the question.

How much computer do you need?

Too often people set out shopping for a computer without first making a list of what they expect the computer to do for them. This is the most common reason for unfulfilled expectations when it comes to technology.

Technology changes at a very rapid pace, and depending on your level of technical knowledge, your expectations of what technology can do will vary widely. Even those who have been around technology for years will sometimes make the most common of errors: buying individual devices without planning how they fit into the total picture. In business today you hear a lot about the thirty-thousand-foot view. It's all about looking at the total picture rather than any one thing.

Never lose sight of the fact that technology is just a tool. The finest tools do not turn a novice craftsman into a master. Your financial adviser will tell you the importance of sound financial planning, so if you view a computer as a tool to automate your life, it makes sense to plan your technology purchases. Planning involves some work, but all you need to get started is a pencil and paper.

Start with a piece of paper and write down your thoughts on a few basic questions. What's in it for you: what benefits do you expect from the system? If you could have anything, what would it be? What would you like to have available to you?


What brand to buy? And where do you buy it?

If you think of a computer as a tool to organize your life or increase your productivity, then where you buy your computer should be the start of an ongoing relationship rather than a one-time occurrence.

The best analogy I ever heard on defining value: if you knew you had to jump out of a plane, where would you buy a parachute? Someone who'd been in the business for a while might be able to help. I know I'd try to find a place that specialized in parachutes. I know I wouldn't trust buying one from the Cheapo-mart.

Speaking strictly from the viewpoint of Windows computers, I stick with major name brands like HP and Lenovo. If you sign up for the mailing lists on the HP and Lenovo websites they will bombard you with sales pitches, but they often have very good deals.

I stay away from the no-name brands and the low-end stuff. I have years of experience with how the cheap stuff doesn't hold up.

Where do you go from here?

In this section we are covering concepts from the perspective of a typical home user. In addressing the question of what computer to buy, the question of which operating system is best inevitably enters the mix, so next up we will tackle "what is the best desktop computer operating system?" If you feel ready to try out the Linux operating system, we will also discuss the various flavors of Linux and why there are so many different distributions.

On computer basics we will go over the definitions of computer system hardware and look at the cables and connections you will need for your home network. If you want to learn more, the sections that follow go into desktop computer troubleshooting and computer networking concepts.

The section on basic network concepts and the OSI model explained in simple terms goes a bit beyond what the average home computer user needs to know. It will be helpful for someone learning computer networking and looking at basic certifications as a network technician.

Many of the articles on this website were written years ago for various classes I taught at local community colleges. I periodically go through the site making changes based on common questions I see asked on online forums. While the site gets revised from time to time, we purposely try not to put anything here that would age quickly, such as current-events topics. Many basic technology concepts do not age as much as you would guess.

If you are not sure what the best technology choice is for you and need some ideas, or if you want to keep up to date on hot topics in technology, check out the Guru 42 small business and technology blog, where we share our views and comments on the technology news of the day.


Learning basic computer and networking technology

ComputerGuru -

Welcome to the Guru42 Universe. Your journey to learning basic computer and networking technology concepts starts here at ComputerGuru.Net

Learning computer networking can be intimidating, like learning a foreign language: so many similar-sounding words and phrases, and acronyms everywhere.

Many people attack a computer concept the way they would attack an unfamiliar word: by looking it up in a dictionary. But it would be difficult to learn a foreign language using only a dictionary as your tool.

Likewise, it is difficult to learn technology concepts simply by looking up specific definitions. We organize the material into sections that can be read like chapters of a book, topic by topic, rather than as a list of definitions like a dictionary.

Since 1998, ComputerGuru.net has attempted to provide self-help and tutorials for learning basic computer and networking technology concepts, maintaining the theme, "Geek Speak Made Simple."

We continue to receive positive feedback from all over the world about our technology websites as we attempt to present material more from a personal "lessons learned" perspective than a textbook perspective.

In our latest update and expansion we have added sections on desktop computer troubleshooting and Windows Server based on many questions and notes collected over the years. We hope they help you better use and understand technology in your world.

Who is The Guru?

Tom's career in business and technology started with communications and moved to office automation systems long before the acronym "IT" was widely used. As a field service technician and manager for various office automation companies, Tom attended numerous customer service training programs and fine-tuned his skills in customer service.

As small business networks evolved, Tom's career expanded as well into the areas of networking and systems administration. Working as a consultant to numerous businesses delivering various technology solutions, Tom gained valuable project management experience.

Tom has been actively speaking and writing on both business and technology issues since before the internet was widely used by small business. Exploring PC telecommunications and its role in business led to Tom's first article for a regional business journal, on how the average business could use computerized bulletin board systems (BBS) as a tool for customer service.

Starting as a trainer in the Army National Guard, then as a community college instructor, and now as a webmaster and freelance writer, Tom Peracchio has developed a knack for putting complex topics into simple terms, or as he likes to call it, geek speak made simple.

As a community college technology trainer, Tom learned that not everyone taking webmaster classes was there to be a technician or engineer. Many people took the classes to appreciate the topics covered so they could communicate more effectively with the technology folks they had to deal with in their roles as business managers.

Through writing and the Guru 42 Universe websites Tom Peracchio shares his technology experiences and insights with a wide variety of technology users to help them use technology smarter to make their life easier. 

The ComputerGuru is Tom Peracchio: IT support specialist, web developer, writer, and technology trainer


What is the best desktop computer operating system?

ComputerGuru -

There is no one size fits all answer to "what is the best desktop computer operating system?" Let me first tackle the differences between Linux, Microsoft, and Apple. Hopefully the tech purists won't beat me up too much for generalizing here.

Arguments over which operating system (OS) is best often focus on the GUI (graphical user interface). Apple focused on being graphical from the start, and on creating a single power-user desktop computer. They have created their own very successful world.

I work in the world of enterprise computing, where many computers talk and work together on local area networks (LANs) and wide area networks (WANs). Some might say I have gone over to the dark side and become a Microsoft fanboy. I bashed Microsoft quite a bit over the years for inefficient operating systems, but after spending more than 20 years working with Microsoft products in the enterprise environment I have come to appreciate Microsoft and all the technology they have created.

Linux is a Unix-like computer operating system. When I was teaching, I always described Unix with a line from a song: "It wasn't built for comfort, it was built for speed." Command line functions, the non-GUI stuff, are important to the people who use Unix. A lot of Linux, like Unix, runs on servers, where nobody cares about the GUI. That's why there are so many distributions of Linux: some are geared to people using it mainly for server-based applications, and some focus on a pretty GUI. "Distro" is simply a shortened form of the term distribution. We will discuss popular Linux distros in our next article.
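
To make the headless-server point concrete, here is a minimal Python sketch of my own (an illustration, not anything from the original article) that guesses whether it is running in a server session with no display, based on the environment variables a Linux desktop normally sets:

```python
import os
import platform

def running_headless():
    """Guess whether this session lacks a graphical display.

    Desktop Linux sessions normally export DISPLAY (X11) or
    WAYLAND_DISPLAY (Wayland); typical server sessions export neither.
    """
    if platform.system() != "Linux":
        return False  # assume desktop OSes like Windows and macOS have a GUI
    return not (os.environ.get("DISPLAY") or os.environ.get("WAYLAND_DISPLAY"))

if __name__ == "__main__":
    if running_headless():
        print("No display found - this looks like a headless server session.")
    else:
        print("A graphical session appears to be available.")
```

Run it over SSH on a server and again on a desktop and you will likely get different answers. It is only a heuristic, since these variables can be set or unset by hand, but it shows that the GUI is an optional layer, not part of Linux itself.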



The Linux kernel

Let me use the analogy of building an automobile and say that the operating system kernel is like the engine and drive train of the vehicle. Some people argue the case for Linux based on the assumption that the Linux kernel offers the best engine and drive train to power our computer. That depends: the best for what purpose?
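
You can peek at the engine under your own hood. The following sketch (mine, not the article's) uses Python's standard platform module, which reports the kernel separately from whatever desktop happens to sit on top of it:

```python
import platform

# platform.uname() reports the "engine" (the kernel) and the hardware
# it drives; the desktop GUI layered on top is not mentioned at all.
u = platform.uname()
print("Kernel:", u.system, u.release)  # e.g. "Linux 6.5.0-18-generic"
print("Hardware:", u.machine)          # e.g. "x86_64"
```

The same kernel release can sit under GNOME, KDE, or no GUI at all, which is exactly the point of the engine-and-drive-train analogy.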

The question often comes up: why don't Microsoft or Apple create services and applications that work with Linux?

From a programming perspective Microsoft has spent billions of dollars creating services and applications that run on their kernel. What incentive would they have to start creating services and applications specific to a Linux kernel?

Apple seems pretty happy pumping out smartphones; some Apple fans are sad that Apple now appears more focused on phones than on computers. Apple is one of the most profitable companies on the planet. Why would they start creating services and applications specific to a Linux kernel?

Who has the best GUI?

If we get beyond the argument over whether the Linux kernel is the best, the question assumes that we need a Windows or Apple graphical user interface (GUI) to make the best operating system. There are many impressive-looking GUIs in the Linux world. Take a look at all the Linux distributions we describe in our next article: some distros have focused on the server geeks and server functions, while others focus on looking good with pretty GUIs for the desktop crowd. For instance, Mint is a fork of Ubuntu, which is itself a fork of Debian; Mint was forked off Ubuntu with the goal of providing a familiar desktop GUI.
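
That family tree is recorded right on the system. Most modern distros ship a small /etc/os-release file whose ID field names the distro and whose ID_LIKE field names the distros it descends from. Here is a short Python sketch of my own (not from the article) that reads it:

```python
def read_os_release(path="/etc/os-release"):
    """Parse the simple KEY=VALUE format of /etc/os-release (Linux only)."""
    info = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip('"')
    return info

info = read_os_release()
print("Distro:", info.get("ID", "unknown"))
print("Derived from:", info.get("ID_LIKE", "(none listed)"))
# On Linux Mint this typically prints linuxmint / ubuntu;
# on Ubuntu itself, ubuntu / debian - the fork lineage in two lines.
```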

You can't make money on Linux

There are answers that suggest Apple or Microsoft could not make money supporting Linux. Some people don't understand the concept of open source and believe you can't make money by supporting it.

Richard Stallman, the father of the free software movement, explains that software freedom refers to the preservation of the freedoms to use, study, distribute, and modify that software, not to zero cost. In illustrating the concept of gratis versus libre, Stallman is famous for the sentence, "free as in free speech, not as in free beer."

As Google has shown with Android you can straddle the fence successfully between supporting an open source operating system while still maintaining a fair amount of proprietary components.

As far as Microsoft supporting Linux, in case you missed it, Microsoft recently joined the Linux Foundation.

Why is Microsoft Windows so popular?

It's funny how questions on forums often start with "Why is Microsoft Windows so popular?" and then go on to give reasons why it shouldn't be. Microsoft is popular; that is the reality. The reasons it supposedly shouldn't be are the typical perceptions of Linux users looking to stir up a debate.

Desktop personal computers started entering homes and offices in the 1980s. The world of what we then called "IBM compatibles" ran command line operating systems. That meant you had to type in commands, short words, to get your computer to perform various tasks. People came up with various menus and interfaces, but the desktop was not very graphical.

The mid-1990s were the perfect storm for Microsoft Windows 95. The world was just discovering the internet as online services began connecting to it for the first time. Microsoft marketed Windows 95 as the graphical user interface to the desktop computer, and Internet Explorer as the gateway to the graphical world wide web. Love them or hate them, Microsoft became the dominant desktop operating system that people used in their homes and connected to the web with in the 1990s.

Because Windows became the predominant desktop operating system of the 1990s in offices and schools, people have little reason to use something different at home. To get people to change, the transition must be totally seamless.

Many Linux fans will say that Linux has become much easier to use and the interface much more like Microsoft Windows. Many Linux users will call Windows too complicated and say that switching over to Linux is easy. That is a matter of perspective. I have been supporting desktop computers for more than 30 years, and I know firsthand how people hate change. Give any Windows user a different operating system and they will call it complicated, because it is different. When BlackBerrys went out of style and people were forced to use iPhones and Androids, I heard users complain about how they missed how easy their BlackBerry was to use. It was easy because it was what they had learned on, and now they were forced to change.

I keep hearing about how all the cool Linux distros are faster, sleeker, and better than Windows, but no major computer company has mass-produced a desktop computer built around a Linux distro. The closest thing to a home-use Linux-based computer is the Google Chromebook. I have a Linux computer at home, but it is just a web browser and email reader. Sure, there are a few games on it as well. But there are too many applications I use at work that I could never bring home, because they won't run on a Linux computer.

I am by no means a Microsoft fanboy. Over the years I have had strong words for how Microsoft has done things, but in recent years I find myself defending Microsoft because some of the negativity gets pretty silly at times. I am not going to force myself and my family to use a Linux computer just to prove a point. I don't see myself going down that road anytime soon.

My perspective is also a bit different than the average home user's: I am a systems admin. I need to worry about how well multiple computers play together with multiple users. A computer can be a toy or a set of tools; what works best for you depends on what applications you need to do the job. There is no one size fits all answer to which computer you should use.

Sorting through the buzzwords and standards

In the OSI architecture, "the physical layer" describes the fundamental layer of computer networking. In more general terms, the physical layer is the carrier of information between computers, using a variety of wired and wireless technologies.
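
The physical layer itself is cables and radio waves rather than software, but every host has network interfaces where that layer meets the operating system. As a loose illustration (a sketch of my own, not part of the original text), Python's standard socket module can list them:

```python
import socket

# socket.if_nameindex() returns an (index, name) pair for each network
# interface the OS knows about (available on Unix, and on Windows since
# Python 3.8). Each interface fronts some physical medium: a wired NIC,
# a Wi-Fi radio, or the software-only loopback device.
for index, name in socket.if_nameindex():
    print(f"interface {index}: {name}")  # e.g. 1: lo, 2: eth0, 3: wlan0
```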

In addition to describing the physical layer in the section on the theoretical OSI Reference Model, we sort through the terms, breaking down the definitions and standards into smaller topics as they relate to some commonly asked questions. The pages on networking hardware are included in the section on common questions and basic computer concepts.

We approach our goal of geek speak made simple from the perspective of a network engineer relating things to specific technology standards, avoiding technology street slang or common buzzwords that are often incorrectly used.

Check out these related articles in your quest to understand technology:

The Physical Layer of the OSI model
 
