PacketFence is an open-source network access control (NAC) system which provides the following features: registration, detection of abnormal network activities, proactive vulnerability scans, isolation of problematic devices, remediation through a captive portal, 802.1X, wireless integration and User-Agent / DHCP fingerprinting.
Wi-Fi is a technology for radio wireless local area networking of devices based on the IEEE 802.11 standards. Wi‑Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. Devices that can use Wi-Fi technologies include, among others, desktops and laptops, video game consoles, tablets, smart TVs, digital audio players, digital cameras and drones. Wi-Fi compatible devices can connect to the Internet via a wireless access point; such an access point has a range of about 20 meters indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres, achieved by using multiple overlapping access points. Different versions of Wi-Fi exist, with different radio bands and speeds. Wi-Fi most commonly uses the 2.4 gigahertz UHF and 5 gigahertz SHF ISM radio bands. Each channel can be time-shared by multiple networks.
These wavelengths work best for line-of-sight use. Many common materials absorb or reflect them, which further restricts range but tends to help minimise interference between different networks in crowded environments. At close range, some versions of Wi-Fi, running on suitable hardware, can achieve speeds of over 1 Gbit/s. Anyone within range with a wireless network interface controller can attempt to access a network. Wi-Fi Protected Access (WPA) is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of WPA have included stronger protections and new security practices as the security landscape has changed over time. In 1971, ALOHAnet connected the Hawaiian Islands with a UHF wireless packet network. ALOHAnet and the ALOHA protocol were early forerunners to Ethernet and the IEEE 802.11 protocols, respectively. A 1985 ruling by the U.S. Federal Communications Commission released the ISM band for unlicensed use.
These frequency bands are the same ones used by equipment such as microwave ovens and are subject to interference. In 1991, NCR Corporation, together with AT&T Corporation, invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. The Australian radio-astronomer Dr John O'Sullivan, with his colleagues Terence Percival, Graham Daniels, Diet Ostry and John Deane, developed a key patent used in Wi-Fi as a by-product of a Commonwealth Scientific and Industrial Research Organisation (CSIRO) research project, "a failed experiment to detect exploding mini black holes the size of an atomic particle". Dr O'Sullivan and his colleagues are credited with inventing Wi-Fi. In 1992 and 1996, CSIRO obtained patents for a method later used in Wi-Fi to "unsmear" the signal. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds. It was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, which proved popular. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay CSIRO $1 billion for infringements on CSIRO patents; this led to Australia labeling Wi-Fi as an Australian invention, though this has been the subject of some controversy. CSIRO won a further $220 million settlement for Wi-Fi patent infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated to be worth an additional $1 billion in royalties. In 2016, the wireless local area network Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects, held in the National Museum of Australia. The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." Phil Belanger, a founding member of the Wi-Fi Alliance who presided over the selection of the name "Wi-Fi", has stated that Interbrand invented Wi-Fi as a pun on the word hi-fi, a term for high-quality audio technology.
Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created. While inspired by the term hi-fi, the name was never "Wireless Fidelity", although the Wi-Fi Alliance was called the "Wireless Fidelity Alliance Inc" in some publications. Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are usually described as fixed wireless. Alternative wireless technologies include mobile phone standards such as 2G, 3G, 4G and LTE. The name is sometimes written as WiFi, Wifi, or wifi, but these spellings are not approved by the Wi-Fi Alliance. IEEE is a separate but related organization, and its website has stated that "WiFi is a short name for Wireless Fidelity". To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller; the combination of a computer and an interface controller is called a station.
A service set is the set of all the devices associated with a particular Wi-Fi network. The service set can be local, extended or mesh. Each service set has an associated identifier, the 32-byte Service Set Identifier (SSID), which identifies the particular network.
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic-directing functions on the Internet. Data sent through the Internet, such as a web page or email, is in the form of data packets. A packet is forwarded from one router to another through the networks that constitute an internetwork until it reaches its destination node. A router is connected to two or more data lines from different networks; when a data packet comes in on one of the lines, the router reads the network address information in the packet to determine the ultimate destination. Using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of routers are home and small office routers that forward IP packets between the home computers and the Internet. An example of a router would be the owner's cable or DSL router, which connects to the Internet through an Internet service provider. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
Though routers are typically dedicated hardware devices, software-based routers also exist. When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol; each router builds up a routing table listing the preferred routes between any two systems on the interconnected networks. A router has two types of network element components organized onto separate planes. Control plane: a router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal preconfigured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table. The control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane. Forwarding plane: the router forwards data packets between incoming and outgoing interface connections.
It forwards them to the correct network type using information that the packet header contains, matched to entries in the FIB supplied by the control plane. A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission; it can also support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' networks. The largest routers may be used in large enterprise networks, while smaller routers provide connectivity for typical home and office networks. All sizes of routers may be found inside enterprises; the most powerful routers are found in ISPs and academic and research facilities.
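The lookup that the forwarding plane performs against the FIB is a longest-prefix match, which can be sketched in a few lines of Python using the standard `ipaddress` module. The prefixes, next hops and interface names below are illustrative, not taken from any real router configuration.

```python
import ipaddress

# Hypothetical FIB: network prefix -> (next hop, outgoing interface).
fib = {
    ipaddress.ip_network("10.0.0.0/8"): ("10.0.0.1", "eth0"),
    ipaddress.ip_network("10.1.0.0/16"): ("10.1.0.1", "eth1"),
    ipaddress.ip_network("0.0.0.0/0"): ("192.168.1.254", "eth2"),  # default route
}

def forward(dst: str):
    """Return (next hop, interface) for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return fib[best]
```

For example, a packet for 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16, and the more specific /16 entry determines the next hop; anything matching no specific prefix falls through to the default route.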
Large businesses may need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use. Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own; they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware such as Tomato, OpenWrt or DD-WRT. Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network, so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks. In enterprises, a core router may provide a collapsed backbone interconnecting the distribution-tier routers from multiple buildings of a campus, or large enterprise locations.
They lack some of the features of edge routers. External networks must be considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Routers also commonly perform network address translation, which restricts connections initiated from external connections but is not recognised as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because open source routers allow mistakes to be found and corrected. Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network of a single organisation is called an interior router. A router operated in the Internet backbone is described as an exterior router, while a router that connects a LAN with the Internet or a wide area network is called a border router, or gateway router. Routers intended for ISP and major enterprise connectivity exchange routing information using the Border Gateway Protocol (BGP).
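The reason network address translation restricts externally initiated connections can be illustrated with a toy port-translating NAT (NAPT) table in Python: outbound traffic creates a mapping, and inbound packets are delivered only if a mapping already exists. The public address, port range and table layout are assumptions for illustration only; real NAT implementations also track protocol state and timeouts.

```python
# Minimal NAPT sketch, assuming a single public address (illustrative value).
PUBLIC_IP = "203.0.113.5"

nat_table = {}      # (private_ip, private_port) -> public_port
reverse_table = {}  # public_port -> (private_ip, private_port)
next_port = 40000

def outbound(private_ip, private_port):
    """Translate an outgoing connection, allocating a public port if needed."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        reverse_table[next_port] = key
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def inbound(public_port):
    """Only packets matching an existing mapping reach a private host."""
    return reverse_table.get(public_port)  # None means the packet is dropped
```

An unsolicited packet arriving on a public port with no entry in the reverse table has no private destination and is simply dropped, which is why NAT incidentally blocks externally initiated connections.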
RFC 4098 defines the types of BGP routers according to their functions. Edge router: also called a provider edge router, this is placed at the edge of an ISP network. The router uses external BGP (eBGP) to
Antivirus software, or anti-virus software, also known as anti-malware, is a computer program used to prevent and remove malware. Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other kinds of malware, antivirus software started to provide protection from other computer threats. In particular, modern antivirus software can protect users from malicious browser helper objects, browser hijackers, keyloggers, rootkits, trojan horses, malicious LSPs, fraudtools and spyware. Some products also include protection from other computer threats, such as infected and malicious URLs, spam and phishing attacks, online identity and online banking attacks, social engineering techniques, advanced persistent threats and botnet DDoS attacks. Although the roots of the computer virus date back as early as 1949, when the Hungarian scientist John von Neumann published the "Theory of self-reproducing automata", the first known computer virus appeared in 1971 and was dubbed the "Creeper virus".
This computer virus infected Digital Equipment Corporation's PDP-10 mainframe computers running the TENEX operating system. The Creeper virus was eventually deleted by a program created by Ray Tomlinson and known as "The Reaper". Some people consider "The Reaper" the first antivirus software ever written; it may be the case, but it is important to note that the Reaper was actually a virus itself, specifically designed to remove the Creeper virus. The Creeper virus was followed by several other viruses; the first known virus that appeared "in the wild" was "Elk Cloner", in 1981, which infected Apple II computers. In 1983, the term "computer virus" was coined by Fred Cohen in one of the first published academic papers on computer viruses. Cohen used the term "computer virus" to describe a program that can "affect other computer programs by modifying them in such a way as to include a copy of itself." The first IBM PC compatible "in the wild" computer virus, and one of the first real widespread infections, was "Brain" in 1986. From then, the number of viruses has grown exponentially.
Most of the computer viruses written in the early and mid-1980s were limited to self-reproduction and had no specific damage routine built into the code. That changed when more and more programmers became acquainted with computer virus programming and created viruses that manipulated or destroyed data on infected computers. Before internet connectivity was widespread, computer viruses were spread by infected floppy disks. Antivirus software came into use, but was updated infrequently. During this time, virus checkers had to check executable files and the boot sectors of floppy disks and hard disks. However, as internet usage became common, viruses began to spread online. There are competing claims for the innovator of the first antivirus product; the first publicly documented removal of an "in the wild" computer virus was performed by Bernd Fix in 1987. In 1987, Andreas Lüning and Kai Figge, who founded G Data Software in 1985, released their first antivirus product for the Atari ST platform. In 1987, the Ultimate Virus Killer was released.
This was the de facto industry standard virus killer for the Atari ST and Atari Falcon, the last version of which was released in April 2004. In 1987, in the United States, John McAfee founded the McAfee company and, at the end of that year, he released the first version of VirusScan. Also in 1987, Peter Paško, Rudolf Hrubý and Miroslav Trnka created the first version of NOD antivirus. In 1987, Fred Cohen wrote that there is no algorithm that can detect all possible computer viruses. At the end of 1987, the first two heuristic antivirus utilities were released: Flushot Plus by Ross Greenberg and Anti4us by Erwin Lanting. In his O'Reilly book, Malicious Mobile Code: Virus Protection for Windows, Roger Grimes described Flushot Plus as "the first holistic program to fight malicious mobile code." However, the kind of heuristic used by early AV engines was different from those used today. The first product with a heuristic engine resembling modern ones was F-PROT in 1991. Early heuristic engines were based on dividing the binary into different sections: data section, code section.
Indeed, the initial viruses re-organized the layout of the sections, or overrode the initial portion of a section in order to jump to the end of the file where the malicious code was located, only going back afterwards to resume execution of the original code. This was a specific pattern, not used at the time by any legitimate software, which represented an elegant heuristic to catch suspicious code. Other kinds of more advanced heuristics were later added, such as suspicious section names, incorrect header size, regular expressions, and partial pattern in-memory matching. In 1988, the growth of antivirus companies continued. In Germany, Tjark Auerbach released the first version of AntiVir. In Bulgaria, Dr. Vesselin Bontchev released his first freeware antivirus program. Frans Veldman released the first version of ThunderByte Antivirus, also known as TBAV. In Czechoslovakia, Pavel Baudiš and Eduard Kučera started avast! (at th
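The entry-point heuristic described above can be sketched as a toy check in Python: flag an executable whose entry point falls inside its last section, as appended virus code typically would. The Section type, offsets and the rule itself are a simplified illustration, not any real AV engine's format parser or detection logic.

```python
from dataclasses import dataclass

@dataclass
class Section:
    name: str
    start: int  # file offset where the section begins
    size: int   # section length in bytes

def entry_point_suspicious(sections, entry_point):
    """Flag an entry point lying inside the last section of the file,
    a layout early file-infecting viruses produced by appending their code."""
    last = max(sections, key=lambda s: s.start)
    return last.start <= entry_point < last.start + last.size

# Hypothetical layout: a clean program starts executing in .text,
# while an infected one jumps straight into code appended at the end.
layout = [Section(".text", 0x400, 0x1000), Section(".data", 0x1400, 0x800)]
```

Modern heuristics layer many such signals (suspicious section names, malformed headers, pattern matching), but each individual check follows this same shape: a structural property that legitimate compilers rarely produce.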
In the fields of physical security and information security, access control is the selective restriction of access to a place or other resource. The act of accessing may mean entering or using. Permission to access a resource is called authorization. Locks and login credentials are two analogous mechanisms of access control. Geographical access control may be enforced with a device such as a turnstile, and there may be fences to avoid circumventing this access control. An alternative to access control in the strict sense is a system of checking authorized presence, see e.g. ticket controller. A variant is exit control, e.g. of a shop or a country. The term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human, through mechanical means such as locks and keys, or through technological means such as access control systems like the mantrap. Within these environments, physical key management may be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets.
Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically, this was partially accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific door, and the keys can be copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed. Electronic access control uses computers to solve the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys; the electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded.
When access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and alarm if the door is forced open or held open too long after being unlocked. When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the door. The control panel also ignores a door open signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED for an access denied and a flashing green LED for an access granted. The above description illustrates a single-factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room, but Bob does not.
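The control-panel decision logic described above (compare the credential's number against an access control list, record the transaction, and signal the relay) might be sketched as follows. The door names, credential numbers and data structures are hypothetical, chosen only to show the control flow.

```python
# Toy access control list: door -> set of authorized credential numbers.
access_control_list = {"door-1": {1001, 1002}}
transaction_log = []  # every presentation is recorded, granted or not

def present_credential(door, credential):
    """Grant or deny access and log the transaction, like a control panel."""
    granted = credential in access_control_list.get(door, set())
    transaction_log.append((door, credential, "granted" if granted else "denied"))
    return granted  # True would energize the door relay for a fixed interval
```

Note that both outcomes are logged: the audit trail of denied attempts is as much a part of electronic access control as the unlock itself.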
Alice either gives Bob her credential, or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two-factor transaction, the presented credential and a second factor are needed for access to be granted. There are three types of authenticating information: something the user knows, e.g. a password, pass-phrase or PIN; something the user has, such as a smart card or a key fob; and something the user is, such as a fingerprint, verified by biometric measurement. Passwords are a common means of verifying a user's identity before access is given to information systems. In addition, a fourth factor of authentication is now recognized: someone you know, whereby another person who knows you can provide a human element of authentication in situations where systems have been set up to allow for such scenarios. For example, a user may have forgotten their smart card. In such a scenario, if the user is known to designated cohorts, the cohorts may provide their smart card and password, in combination with the extant factor of the user in question, and thus provide two factors for the user with the missing credential, giving three factors overall to allow access.
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being that enables an individual access to a given physical facility or computer-based information system. Credentials can be something a person knows, something they have, something they are, or some combination of these items; this is known as multi-factor authentication. The typical credential is an access card or key fob, and newer software can turn users' smartphones into access devices. There are many card technologies including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, and contactless smart cards. Also available are key fobs, which are more compact than ID cards and attach to a key ring. Biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan and hand geometry; the built-in biometric technologies found o
A network switch is a computer networking device that connects devices on a computer network by using packet switching to receive and forward data to the destination device. A network switch is a multiport network bridge that uses hardware addresses to process and forward data at the data link layer of the OSI model. Some switches can also process data at the network layer by additionally incorporating routing functionality; such switches are known as layer-3 switches or multilayer switches. Switches for Ethernet are the most common form of network switch; the first Ethernet switch was introduced by Kalpana in 1990. Switches also exist for other types of networks including Fibre Channel, Asynchronous Transfer Mode, and InfiniBand. Unlike less advanced repeater hubs, which broadcast the same data out of each of their ports and let the devices decide what data they need, a network switch forwards data only to the devices that need to receive it. Multiple data cables are plugged into a switch to enable communication between different networked devices.
Switches manage the flow of data across a network by transmitting a received network packet only to the one or more devices for which the packet is intended. Each networked device connected to a switch can be identified by its network address, allowing the switch to direct the flow of traffic, maximizing the security and efficiency of the network. A switch is more intelligent than an Ethernet hub, which simply retransmits packets out of every port of the hub except the port on which the packet was received, is unable to distinguish different recipients, and achieves an overall lower network efficiency. An Ethernet switch operates at the data link layer of the OSI model to create a separate collision domain for each switch port. Each device connected to a switch port can transfer data to any of the other ports at any time, and the transmissions will not interfere. Because broadcasts are still being forwarded to all connected devices by the switch, the newly formed network segment continues to be a broadcast domain.
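The behavior that distinguishes a switch from a hub can be sketched as a small MAC-learning forwarding table: the switch records which port each source address arrived on, forwards frames whose destination is known to that single port, and floods frames for unknown destinations to all other ports, as a hub would. Port numbers and MAC address strings below are illustrative.

```python
mac_table = {}  # learned mapping: source MAC address -> switch port

def handle_frame(in_port, src_mac, dst_mac, num_ports=4):
    """Return the list of ports the frame is sent out of."""
    mac_table[src_mac] = in_port  # learn the sender's location
    if dst_mac in mac_table and mac_table[dst_mac] != in_port:
        return [mac_table[dst_mac]]  # destination known: forward to one port
    # Destination unknown (or on the arrival port): flood to all other ports.
    return [p for p in range(num_ports) if p != in_port]
```

The first frame between two hosts is flooded, but the reply teaches the switch both locations, after which traffic between them occupies only the two ports involved; this learning is what confines each conversation and keeps the other segments free.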
Switches may also operate at higher layers of the OSI model, including the network layer and above. A device that operates at these higher layers is known as a multilayer switch. Segmentation involves the use of a switch to split a larger collision domain into smaller ones in order to reduce collision probability and to improve overall network throughput. In the extreme case, each device is located on a dedicated switch port. In contrast to an Ethernet hub, there is a separate collision domain on each of the switch ports. This allows computers to have dedicated bandwidth on point-to-point connections to the network and to run in full-duplex mode. Full-duplex mode has only one transmitter and one receiver per collision domain, making collisions impossible. The network switch plays an integral role in most modern Ethernet local area networks. Mid-to-large sized LANs contain a number of linked managed switches. Small office/home office applications typically use a single switch, or an all-purpose device such as a residential gateway to access small office/home broadband services such as DSL or cable Internet.
In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology. User devices may also include a telephone interface for Voice over IP. Switches are most commonly used as the network connection point for hosts at the edge of a network. In the hierarchical internetworking model and similar network architectures, switches are also used deeper in the network to provide connections between the switches at the edge. In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, RapidIO, ATM, ITU-T G.hn and 802.11. This connectivity can be at any of the layers mentioned. While the layer-2 functionality is adequate for bandwidth-shifting within one technology, interconnecting technologies such as Ethernet and token ring is performed more easily at layer 3 or via routing. Devices that interconnect at layer 3 are traditionally called routers, so layer-3 switches can also be regarded as relatively primitive and specialized routers.
Where there is a need for a great deal of analysis of network performance and security, switches may be connected between WAN routers as places for analytic modules. Some vendors provide firewall, network intrusion detection, performance analysis modules that can plug into switch ports; some of these functions may be on combined modules. Through port mirroring, a switch can create a mirror image of data that can go to an external device such as intrusion detection systems and packet sniffers. A modern switch may implement power over Ethernet, which avoids the need for attached devices, such as a VoIP phone or wireless access point, to have a separate power supply. Since switches can have redundant power circuits connected to uninterruptible power supplies, the connected device can continue operating when regular office power fails. Modern commercial switches use Ethernet interfaces; the core function of an Ethernet switch is to provide a multiport layer 2 bridging function. Many switches perform operations at other layers.
A device capable of more than bridging is known as a multilayer switch. Switches may learn about topologies at many layers and forward at one or more layers. A layer-1 network device transfers data but does not manage any of the traffic coming through it; an example is an Ethernet hub. Any packet entering a port is repeated to the output of every other port.