Electrical engineering is a professional engineering discipline that deals with the study and application of electricity and electromagnetism. The field first became an identifiable occupation in the latter half of the 19th century, after the commercialization of the electric telegraph, the telephone, and electric power distribution and use. Subsequently, broadcasting and recording media made electronics part of daily life, and the invention of the transistor and, later, the integrated circuit brought down the cost of electronics to the point that they can be used in almost any household object. Electrical engineering has now divided into a wide range of fields including electronics, digital computers, computer engineering, power engineering, telecommunications, control systems, radio-frequency engineering, signal processing and microelectronics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations such as hardware engineering, power electronics, electromagnetics and waves, microwave engineering, electrochemistry, renewable energies, electrical materials science, and much more.
Electrical engineers typically hold a degree in electrical engineering or electronic engineering. Practising engineers may be members of a professional body; such bodies include the Institute of Electrical and Electronics Engineers and the Institution of Engineering and Technology. Electrical engineers work in a wide range of industries, and the skills required are correspondingly variable, ranging from basic circuit theory to the management skills required of a project manager. The tools and equipment that an individual engineer may need are likewise variable, ranging from a simple voltmeter to a top-end analyzer to sophisticated design and manufacturing software. Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist who was the first to draw a clear distinction between magnetism and static electricity; he is credited with establishing the term "electricity". He designed the versorium, a device that detects the presence of statically charged objects.
In 1762 the Swedish professor Johan Carl Wilcke invented a device, named the electrophorus, that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery. In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in his Treatise on Electricity and Magnetism. In 1782 Georges-Louis Le Sage developed and presented in Berlin the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet.
This telegraph, an electrostatic telegraph, connected two rooms. In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804 he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was innovative, though it was influenced by and based upon two new discoveries made in Europe in 1800: Alessandro Volta's electric battery for generating an electric current, and William Nicholson and Anthony Carlisle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and USA to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity.
Over 50 years later, he joined the new Society of Telegraph Engineers, where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of landlines, submarine cables and, from about 1890, wireless telegraphy. Practical applications and advances in such fields created an increasing need for standardised units of measure, and they led to the international standardization of the units volt, coulomb, ohm and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardisation in various industries, and in many countries the definitions were recognized in relevant legislation. During these years, the study of electricity was largely considered to be a subfield of physics, since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882.
In the same year, the first electrical engineering degree program was started at the Massachusetts Institute of Technology in the physics department, under Professor Charles Cross.
In computing and optical disc recording technologies, an optical disc is a flat, circular disc that encodes binary data in the form of pits and lands on a special material on one of its flat surfaces. The encoding material sits atop a thicker substrate which makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc surface, extending from the innermost track to the outermost track. The data is stored on the disc with a laser or stamping machine, and can be accessed when the data path is illuminated with a laser diode in an optical disc drive which spins the disc at speeds of about 200 to 4,000 RPM or more, depending on the drive type, the disc format, and the distance of the read head from the center of the disc. Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by their grooves. This side of the disc contains the actual data and is coated with a transparent material, usually lacquer.
The reverse side of an optical disc usually has a printed label, sometimes made of paper but often printed or stamped onto the disc itself. Unlike the 3½-inch floppy disk, most optical discs do not have an integrated protective casing and are therefore susceptible to data transfer problems due to scratches and other environmental damage. Optical discs are between 7.6 and 30 cm in diameter, with 12 cm being the most common size. A typical disc is about 1.2 mm thick. An optical disc is designed to support one of three recording types: read-only, recordable, or re-recordable. Write-once optical discs have an organic dye recording layer between the substrate and the reflective layer. Rewritable discs contain an alloy recording layer composed of a phase change material, most often AgInSbTe, an alloy of silver, indium, antimony and tellurium. Optical discs are most commonly used for storing music, video, or data and programs for personal computers. The Optical Storage Technology Association promotes standardized optical storage formats.
Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage. Libraries and archives enact optical media preservation procedures to ensure continued usability in the computer's optical disc drive or corresponding disc player. For computer data backup and physical data transfer, optical discs such as CDs and DVDs are being replaced with faster, smaller solid-state devices such as the USB flash drive; this trend is expected to continue as USB flash drives continue to increase in capacity and drop in price. Additionally, music purchased or shared over the Internet has reduced the number of audio CDs sold annually. The first recorded historical use of an optical disc was in 1884, when Alexander Graham Bell, Chichester Bell and Charles Sumner Tainter recorded sound on a glass disc using a beam of light. An early optical disc system existed in 1935, named Lichttonorgel. An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969.
This form of optical disc was an early form of the DVD. It is of special interest that U.S. Patent 4,893,297, filed in 1989 and issued in 1990, generated royalty income for Pioneer Corporation's DVA until 2007, by then encompassing the CD, DVD and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company, Gauss Electrophysics. American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil, lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966, and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents in the 1980s. Both Gregg's and Russell's discs are floppy media read in transparent mode, which imposes serious drawbacks. In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer, read by a focused laser beam (U.S. Patent 5,068,846, filed 1972, issued 1991).
Kramer's physical format is used in all optical discs. In 1975, Philips and MCA began to work together, and in 1978, commercially much too late, they presented their long-awaited Laserdisc in Atlanta. MCA delivered the discs and Philips the players. However, the presentation was a commercial failure, and the cooperation ended. In Japan and the U.S., Pioneer succeeded with the videodisc until the advent of the DVD. In 1979, Philips and Sony, in a consortium, developed the audio compact disc. Also in 1979, Exxon STAR Systems in Pasadena, CA built a computer-controlled WORM drive that utilized thin film coatings of tellurium and selenium on a 12" diameter glass disk; the recording system utilized blue light to record and red light at 632.8 nm to read. STAR Systems was bought by Storage Technology Corporation in 1981 and moved to Boulder, CO. Development of the WORM technology was continued using 14" diameter aluminum substrates. Beta testing of the disk drives labeled the Laser
Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics describes the behaviour of visible and infrared light; because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, difficult to apply in practice. Practical optics is done using simplified models; the most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics; the ray-based model of light was developed first, followed by the wave model of light.
Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics; when considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields and medicine. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, telescopes, microscopes and fibre optics. Optics began with the development of lenses by the ancient Mesopotamians; the earliest known lenses, made from polished crystal quartz, date from as early as 700 BC for Assyrian lenses such as the Layard/Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses.
These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word optics comes from the ancient Greek word ὀπτική, meaning "appearance, look". Greek philosophy on optics broke down into two opposing theories on how vision worked, the "intromission theory" and the "emission theory". The intromission approach saw vision as coming from objects casting off copies of themselves that were captured by the eye. With many propagators, including Democritus, Epicurus and their followers, this theory seems to have some contact with modern theories of what vision is, but it remained only speculation lacking any experimental foundation. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes, and he commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid wrote a treatise entitled Optics in which he linked vision to geometry, creating geometrical optics.
He based his work on Plato's emission theory wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays from the eye formed a cone, the vertex being within the eye, the base defining the visual field; the rays were sensitive, conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena.
In 984, the Persian mathematician Ibn Sahl wrote the treatise On Burning Mirrors and Lenses, describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for curved mirrors. In the early 11th century, Alhazen wrote the Book of Optics, in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics, with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to explain how the eye captured the rays. Alhazen's work was largely ignored in the Arabic world, but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo, making it a standard text on optics in Europe for the next 400 years. In the 13th century in medieval Europe, the English bishop Robert Grosseteste wrote on a wide range of scientific topics and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing it on the works of Aristotle and Platonism.
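The law of refraction that Ibn Sahl described, later known in Europe as Snell's law, relates the angles of incidence and refraction through the refractive indices of the two media. The sketch below is an illustrative Python implementation (the function name and the example refractive indices for air and water are assumptions for demonstration, not from the text):

```python
import math

def refraction_angle(theta_incidence_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refraction angle in degrees, or None when the
    incident ray undergoes total internal reflection."""
    s = n1 * math.sin(math.radians(theta_incidence_deg)) / n2
    if abs(s) > 1.0:
        return None  # no refracted ray exists
    return math.degrees(math.asin(s))

# Light entering water (n ≈ 1.33) from air (n ≈ 1.00) at 30 degrees:
print(round(refraction_angle(30, 1.00, 1.33), 2))  # → 22.08
```

The same function also shows why light travelling from a dense medium to a less dense one can be trapped: beyond the critical angle the sine exceeds 1 and no refracted ray exists.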
Grosseteste's most famous disciple, Roger Bacon, wrote works citing a wide range of recently translated optical and philosophical works.
Vacuum is space devoid of matter. The word stems from the Latin adjective vacuus, meaning "vacant" or "void". An approximation to such vacuum is a region with a gaseous pressure much less than atmospheric pressure. Physicists often discuss ideal test results that would occur in a perfect vacuum, which they sometimes simply call "vacuum" or free space, and use the term partial vacuum to refer to an actual imperfect vacuum as one might have in a laboratory or in space. In engineering and applied physics, on the other hand, vacuum refers to any space in which the pressure is lower than atmospheric pressure. The Latin term in vacuo is used to describe an object that is surrounded by a vacuum. The quality of a partial vacuum refers to how closely it approaches a perfect vacuum. Other things equal, lower gas pressure means higher-quality vacuum. For example, a typical vacuum cleaner produces enough suction to reduce air pressure by around 20%. Much higher-quality vacuums are possible. Ultra-high vacuum chambers, common in chemistry and engineering, operate below one trillionth of atmospheric pressure and can reach around 100 particles/cm³.
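Particle densities like those quoted above follow from the ideal gas law, n = p/(kB·T). The Python sketch below illustrates the conversion (the function name and the assumed room temperature of 295 K are illustrative choices, not from the text):

```python
KB = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def number_density_per_cm3(pressure_pa, temperature_k=295.0):
    """Ideal-gas number density n = p / (kB * T), converted to cm^-3."""
    per_m3 = pressure_pa / (KB * temperature_k)
    return per_m3 / 1e6  # m^-3 → cm^-3

ATM = 101325.0  # standard atmospheric pressure, Pa
print(f"{number_density_per_cm3(ATM):.3g}")          # ~2.5e19 molecules/cm^3 at 1 atm
print(f"{number_density_per_cm3(ATM * 1e-12):.3g}")  # ~2.5e7 at a trillionth of 1 atm
```

Even at a trillionth of atmospheric pressure there remain tens of millions of molecules per cubic centimetre, which is why reaching ~100 particles/cm³ requires still lower pressures.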
Outer space is an even higher-quality vacuum, with the equivalent of just a few hydrogen atoms per cubic meter on average in intergalactic space. According to modern understanding, even if all matter could be removed from a volume, it would still not be "empty" due to vacuum fluctuations, dark energy, transiting gamma rays, cosmic rays and other phenomena in quantum physics. In the study of electromagnetism in the 19th century, vacuum was thought to be filled with a medium called aether. In modern particle physics, the vacuum state is considered the ground state of a field. Vacuum has been a frequent topic of philosophical debate since ancient Greek times, but was not studied empirically until the 17th century. Evangelista Torricelli produced the first laboratory vacuum in 1643, and other experimental techniques were developed as a result of his theories of atmospheric pressure. A torricellian vacuum is created by filling a tall glass container, closed at one end, with mercury and then inverting it in a bowl to contain the mercury.
Vacuum became a valuable industrial tool in the 20th century with the introduction of incandescent light bulbs and vacuum tubes, and a wide array of vacuum technology has since become available. The development of human spaceflight has raised interest in the impact of vacuum on human health, and on life forms in general. The word vacuum comes from Latin, meaning 'an empty space, void', a noun use of the neuter of vacuus, meaning "empty", related to vacare, meaning "be empty". Vacuum is one of the few words in the English language that contains two consecutive letters 'u'. There has been much dispute over whether such a thing as a vacuum can exist. Ancient Greek philosophers debated the existence of a vacuum, or void, in the context of atomism, which posited void and atom as the fundamental explanatory elements of physics. Following Plato, however, the abstract concept of a featureless void faced considerable skepticism: it could not be apprehended by the senses, it could not itself provide additional explanatory power beyond the physical volume with which it was commensurate and, by definition, it was quite literally nothing at all, which cannot rightly be said to exist.
Aristotle believed that no void could occur because the denser surrounding material continuum would fill any incipient rarity that might give rise to a void. In his Physics, book IV, Aristotle offered numerous arguments against the void: for example, that motion through a medium which offered no impediment could continue ad infinitum, there being no reason that something would come to rest anywhere in particular. Although Lucretius argued for the existence of vacuum in the first century BC, and Hero of Alexandria tried unsuccessfully to create an artificial vacuum in the first century AD, it was European scholars such as Roger Bacon, Blasius of Parma and Walter Burley in the 13th and 14th centuries who focused considerable attention on these issues. Following Stoic physics in this instance, scholars from the 14th century onward departed from the Aristotelian perspective in favor of a supernatural void beyond the confines of the cosmos itself, a conclusion widely acknowledged by the 17th century, which helped to segregate natural and theological concerns.
Two thousand years after Plato, René Descartes proposed a geometrically based alternative theory of atomism, without the problematic nothing–everything dichotomy of void and atom. Although Descartes agreed with the contemporary position that a vacuum does not occur in nature, the success of his namesake coordinate system and, more implicitly, the spatial–corporeal component of his metaphysics would come to define the philosophically modern notion of empty space as a quantified extension of volume. By the ancient definition, however, directional information and magnitude were conceptually distinct. In the medieval Middle Eastern world, the physicist and Islamic scholar Al-Farabi conducted a small experiment concerning the existence of vacuum, in which he investigated handheld plungers in water. He concluded that air's volume can expand to fill available space, and he suggested that the concept of a perfect vacuum was incoherent. However, according to Nader El-Bizri, the physicist Ibn al-Haytham and the Mu'tazili theologians disagreed with Aristotle and Al-Farabi, and they supported the existence of a void.
Using geometry, Ibn al-Haytham mathematically demonstrated that place is the imagined three-dimensional void between the inner surfaces of a containing body. According to Ahmad Dallal, Abū Rayhān al-Bīrūnī states that "there is no observable
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example: if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the time interval between beats) is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, radio waves, and light. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics and radio, frequency is denoted by a Latin letter f or by the Greek letter ν (nu). The relation between the frequency f and the period T of a repeating event or oscillation is given by f = 1/T.
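The reciprocal relation between frequency and period, and the heartbeat example above, can be expressed in a few lines of Python (a minimal sketch; the function names are illustrative):

```python
def period(frequency_hz):
    """Period T = 1/f, in seconds."""
    return 1.0 / frequency_hz

def frequency(period_s):
    """Frequency f = 1/T, in hertz."""
    return 1.0 / period_s

# A heart beating 120 times a minute beats at 2 Hz,
# so each cycle (the interval between beats) lasts 0.5 s.
f = 120 / 60.0
print(f, period(f))  # → 2.0 0.5
```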
The SI derived unit of frequency is the hertz, named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second; if a TV has a refresh rate of 1 hertz, the TV's screen will change its picture once a second. A previous name for this unit was cycles per second; the SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm; 60 rpm equals one hertz. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency, while short and fast waves, like audio and radio, are described by their frequency instead of period. Angular frequency, denoted by the Greek letter ω, is defined as the rate of change of angular displacement, θ, or the rate of change of the phase of a sinusoidal waveform, or as the rate of change of the argument to the sine function: y(t) = sin(θ(t)) = sin(ωt) = sin(2πft), with dθ/dt = ω = 2πf. Angular frequency is measured in radians per second but, for discrete-time signals, can be expressed as radians per sampling interval, a dimensionless quantity.
Angular frequency is larger than regular frequency by a factor of 2π. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g. y(x) = sin(θ(x)) = sin(kx), with dθ/dx = k. Wavenumber, k, is the spatial frequency analogue of angular temporal frequency and is measured in radians per meter. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength, λ. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. When waves from a monochrome source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Measurement of frequency can be done in the following ways. Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period and dividing the count by the length of the time period.
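The relations f = v/λ (with v = c in vacuum) and k = 2π/λ can be checked numerically. A short Python sketch (the function names are illustrative; the 500 nm example wavelength is an assumption for demonstration):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_from_wavelength(wavelength_m, phase_velocity=C):
    """f = v / λ; defaults to electromagnetic waves in vacuum (v = c)."""
    return phase_velocity / wavelength_m

def wavenumber(wavelength_m):
    """k = 2π / λ, the spatial analogue of angular frequency, in rad/m."""
    return 2.0 * math.pi / wavelength_m

# Green light with a wavelength of 500 nm:
print(f"{frequency_from_wavelength(500e-9):.4g} Hz")  # roughly 6e14 Hz
```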
For example, if 71 events occur within 15 seconds, the frequency is f = 71/(15 s) ≈ 4.73 Hz. If the number of counts is not large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2T), where T is the gating (measurement) interval.
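The worked example and the gating-error formula above can be reproduced directly (a sketch; the function names, and the interpretation of T as the gating interval, follow the description in the text):

```python
def frequency_by_counting(event_count, gate_time_s):
    """Estimate frequency as the number of events divided by the gate time."""
    return event_count / gate_time_s

def gating_error_hz(gate_time_s):
    """Average frequency error from the 0-to-1-count ambiguity: Δf = 1/(2T)."""
    return 1.0 / (2.0 * gate_time_s)

# 71 events observed during a 15-second gate:
print(round(frequency_by_counting(71, 15.0), 2))  # → 4.73
print(round(gating_error_hz(15.0), 4))            # → 0.0333
```

Lengthening the gate time shrinks the gating error, which is why low frequencies are better measured by timing a fixed number of cycles instead.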
In electromagnetism, absolute permittivity, often simply called permittivity and denoted by the Greek letter ε, is a measure of the capacitance that is encountered when forming an electric field in a particular medium. More specifically, permittivity describes the amount of charge needed to generate one unit of electric flux in a particular medium. Accordingly, a charge will yield more electric flux in a medium with low permittivity than in a medium with high permittivity. Permittivity is the measure of a material's ability to store an electric field in the polarization of the medium. The SI unit for permittivity is farad per meter. The lowest possible permittivity is that of a vacuum. Vacuum permittivity, sometimes called the electric constant, is represented by ε0 and has a value of about 8.85×10−12 F/m. The permittivity of a dielectric medium is represented by the ratio of its absolute permittivity to the electric constant. This dimensionless quantity is called the medium's relative permittivity, sometimes also simply called "permittivity". Relative permittivity is commonly referred to as the dielectric constant, a term deprecated in physics and engineering as well as in chemistry.
κ = εr = ε/ε0. By definition, a perfect vacuum has a relative permittivity of exactly 1. The difference in permittivity between a vacuum and air can usually be considered negligible, as κair = 1.0006. Relative permittivity is directly related to electric susceptibility, a measure of how easily a dielectric polarizes in response to an electric field, given by χ = κ − 1, otherwise written as ε = εr ε0 = (1 + χ)ε0. The standard SI unit for permittivity is farad per meter: F/m = C/(V·m) = C²/(N·m²) = A²·s⁴/(kg·m³) = N/V². In electromagnetism, the electric displacement field D represents how an electric field E influences the organization of electric charges in a given medium, including charge migration and electric dipole reorientation. Its relation to permittivity in the simple case of linear, isotropic materials with "instantaneous" response to changes in electric field is D = εE, where the permittivity ε is a scalar. If the medium is anisotropic, the permittivity is a second-rank tensor. In general, permittivity is not a constant, as it can vary with the position in the medium, the frequency of the field applied, humidity and other parameters.
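The definitions κ = ε/ε0 and χ = κ − 1 translate directly into code. A minimal Python sketch (the function names are illustrative):

```python
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m

def relative_permittivity(epsilon):
    """κ = εr = ε / ε0, a dimensionless ratio."""
    return epsilon / EPSILON_0

def susceptibility(kappa):
    """Electric susceptibility χ = κ − 1."""
    return kappa - 1.0

# Air: κ ≈ 1.0006, so its susceptibility is only about 6e-4,
# which is why air can usually be treated as vacuum.
print(f"{susceptibility(1.0006):.1e}")  # → 6.0e-04
```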
In a nonlinear medium, the permittivity can depend on the strength of the electric field. Permittivity as a function of frequency can take on complex values. In SI units, permittivity is measured in farads per meter; the displacement field D is measured in units of coulombs per square meter, while the electric field E is measured in volts per meter. D and E describe the interaction between charged objects: D is related to the charge densities associated with this interaction, while E is related to the forces and potential differences. The vacuum permittivity ε0 is the ratio D/E in free space. It appears in the Coulomb force constant, ke = 1/(4πε0). Its value is ε0 ≝ 1/(c0²μ0) = 1/(35 950 207 149.472 7056 π) F/m ≈ 8.854 187 8176 × 10−12 F/m, where c0 is the speed of light in free space and µ0 is the vacuum permeability. The constants c0 and μ0 were defined in SI units to have exact numerical values, shifting responsibility of experiment to the determination of the meter and the ampere. The linear permittivity of a homogeneous material is given relative to that of free space, as a relative permittivity εr (also called dielectric constant, although this term is deprecated and sometimes only refers to the static, zero-frequency relative permittivity).
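The numerical value of ε0, and the Coulomb constant derived from it, follow from c0 and μ0. A Python sketch using the pre-2019 exact SI values referred to in the text (the variable names are illustrative):

```python
import math

C0 = 299_792_458.0     # speed of light in vacuum, m/s (exact)
MU_0 = 4e-7 * math.pi  # vacuum permeability in the pre-2019 SI, H/m

# ε0 = 1 / (c0^2 * μ0)
EPSILON_0 = 1.0 / (C0 ** 2 * MU_0)

# Coulomb force constant k_e = 1 / (4π ε0)
K_E = 1.0 / (4.0 * math.pi * EPSILON_0)

print(f"eps0 ≈ {EPSILON_0:.6e} F/m")  # ≈ 8.854188e-12 F/m
print(f"k_e  ≈ {K_E:.6e}")            # ≈ 8.987552e+09 N·m²/C²
```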
A transmission medium is a material substance that can propagate energy waves. For example, the transmission medium for sound is typically a gas, but solids and liquids may also act as transmission media for sound. The absence of a material medium in vacuum may also constitute a transmission medium for electromagnetic waves such as light and radio waves. While no material substance is required for electromagnetic waves to propagate, such waves are affected by the transmission media they pass through, for instance by absorption or by reflection or refraction at the interfaces between media. The term transmission medium also refers to a technical device that employs a material substance to transmit or guide waves; thus, an optical fiber or a copper cable is a transmission medium, and such media can also guide the transmission of signals in networks. A transmission medium can be classified as a linear medium if different waves at any particular point in the medium can be superposed. Electromagnetic radiation can be transmitted through an optical medium, such as optical fiber, or through twisted pair wires, coaxial cable, or dielectric-slab waveguides.
It may also pass through any physical material that is transparent to the specific wavelength, such as water, glass, or concrete. Sound is, by definition, the vibration of matter, so it requires a physical medium for transmission, as do other kinds of mechanical waves and heat energy. Historically, science incorporated various aether theories to explain the transmission medium. However, it is now known that electromagnetic waves do not require a physical transmission medium, and so can travel through the "vacuum" of free space. Regions of the insulative vacuum can become conductive for electrical conduction through the presence of free electrons, holes, or ions. A physical medium in data communications is the transmission path over which a signal propagates. Many transmission media are used as communications channels. For telecommunications purposes in the United States, Federal Standard 1037C classifies transmission media as one of the following: guided, in which the waves are guided along a solid medium such as a transmission line, or wireless, in which transmission and reception are achieved by means of an antenna.
One of the most common physical media used in networking is copper wire, which can carry signals over long distances using relatively low amounts of power. The unshielded twisted pair is eight strands of copper wire, organized into four pairs. Another example of a physical medium is optical fiber, a thin strand of glass, which has emerged as the most commonly used transmission medium for long-distance communications. Major factors favoring optical fiber over copper include data rates, distance, and costs. Optical fiber can carry huge amounts of data compared to copper, and it can be run for hundreds of miles without the need for signal repeaters, in turn reducing maintenance costs and improving the reliability of the communication system, because repeaters are a common source of network failures. Glass is also lighter than copper, reducing the need for specialized heavy-lifting equipment when installing long-distance optical fiber. Optical fiber for indoor applications costs about a dollar a foot, the same as copper.
Multimode and single mode are the two types of commonly used optical fiber. Multimode fiber uses LEDs as the light source and can carry signals over shorter distances, about 2 kilometers; single mode fiber can carry signals over distances of tens of miles. Wireless media may carry surface waves or skywaves, either longitudinally or transversely, and are so classified. In both cases, communication is in the form of electromagnetic waves. With guided transmission media, the waves are guided along a physical path. Unguided transmission media are methods that allow the transmission of data without the use of physical means to define the path it takes; examples include radio and infrared. Unguided media provide a means of transmission but do not guide the waves. The term direct link refers to the transmission path between two devices in which signals propagate directly from transmitter to receiver with no intermediate devices other than amplifiers or repeaters used to increase signal strength. This term can apply to both guided and unguided media. A transmission may be simplex, half-duplex, or full-duplex.
In simplex transmission, signals are transmitted in only one direction. In half-duplex operation, both stations may transmit, but only one at a time. In full-duplex operation, both stations may transmit simultaneously; in the latter case, the medium is carrying signals in both directions at the same time. Guided media thus include the unshielded twisted pair, the shielded twisted pair, coaxial cable, and optical fiber. Unguided transmission media carry data signals that flow through the air; they are not bound to a channel to follow. Unguided media used for data communication include radio transmission and microwave transmission. Transmission and reception of data is performed in four steps: the data is coded as binary numbers at the sender end, and a carrier signal is then modulated as specified by the binary representation of the data.