A rootkit is a collection of malicious computer software designed to enable access to a computer, or to an area of its software, that is not otherwise allowed, and that often masks its existence or the existence of other software. The term rootkit is a compound of "root" (the traditional name of the privileged account on Unix-like operating systems) and the word "kit"; the term has negative connotations through its association with malware. Rootkit installation can be automated, or an attacker can install it after having obtained root or administrator access. Obtaining this access is a result of direct attack on a system, i.e. exploiting a known vulnerability or a cracked password. Once installed, it becomes possible to hide the intrusion as well as to maintain privileged access; the key is the root or administrator access. Full control over a system means that existing software can be modified, including software that might otherwise be used to detect or circumvent the rootkit. Rootkit detection is difficult because a rootkit may be able to subvert the very software intended to find it. Detection methods include using an alternative and trusted operating system, behavioral-based methods, signature scanning, difference scanning, and memory dump analysis.
Removal can be complicated or practically impossible in cases where the rootkit resides in the kernel. When dealing with firmware rootkits, removal may require hardware replacement or specialized equipment. The term rootkit or root kit originally referred to a maliciously modified set of administrative tools for a Unix-like operating system that granted "root" access. If an intruder could replace the standard administrative tools on a system with a rootkit, the intruder could obtain root access over the system whilst concealing these activities from the legitimate system administrator. These first-generation rootkits were trivial to detect by using tools such as Tripwire that had not been compromised to access the same information. Lane Davis and Steven Dake wrote the earliest known rootkit in 1990 for Sun Microsystems' SunOS UNIX operating system. In the lecture he gave upon receiving the Turing Award in 1983, Ken Thompson of Bell Labs, one of the creators of Unix, theorized about subverting the C compiler in a Unix distribution and discussed the exploit.
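Integrity checkers such as Tripwire work by recording cryptographic hashes of system files while the system is known-good, then later comparing current hashes against that baseline. The sketch below illustrates the idea only; the function names are invented for this example and do not reflect Tripwire's actual design.

```python
import hashlib

def hash_file(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def snapshot(paths):
    """Record a trusted baseline (path -> hash) while the system is known-good."""
    return {path: hash_file(path) for path in paths}

def changed_files(baseline, paths):
    """Return the paths whose current hash no longer matches the baseline."""
    return [path for path in paths if baseline.get(path) != hash_file(path)]
```

Note that such a checker is only trustworthy if the baseline is stored offline and the checker itself runs from media the rootkit cannot modify; a kernel-level rootkit can lie to any tool running on the compromised system.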
The modified compiler would detect attempts to compile the Unix login command and generate altered code that would accept not only the user's correct password, but an additional "backdoor" password known to the attacker. Additionally, the compiler would detect attempts to compile a new version of the compiler and would insert the same exploits into the new compiler. A review of the source code for the login command or the updated compiler would not reveal any malicious code; this exploit was equivalent to a rootkit. The first documented computer virus to target the personal computer, discovered in 1986, used cloaking techniques to hide itself: the Brain virus intercepted attempts to read the boot sector and redirected them elsewhere on the disk, where a copy of the original boot sector was kept. Over time, DOS-virus cloaking methods became more sophisticated, with advanced techniques including the hooking of low-level disk INT 13H BIOS interrupt calls to hide unauthorized modifications to files. The first malicious rootkit for the Windows NT operating system appeared in 1999: a trojan called NTRootkit created by Greg Hoglund.
It was followed by HackerDefender in 2003. The first rootkit targeting Mac OS X appeared in 2009, while the Stuxnet worm was the first to target programmable logic controllers. In 2005, Sony BMG published CDs with copy protection and digital rights management software called Extended Copy Protection, created by software company First 4 Internet; the software included a music player but silently installed a rootkit which limited the user's ability to access the CD. Software engineer Mark Russinovich, who created the rootkit detection tool RootkitRevealer, discovered the rootkit on one of his computers; the ensuing scandal raised the public's awareness of rootkits. To cloak itself, the rootkit hid from the user any file starting with "$sys$". Soon after Russinovich's report, malware appeared which took advantage of that cloaking behavior on affected systems. One BBC analyst called it a "public relations nightmare." Sony BMG released patches to uninstall the rootkit, but they exposed users to an even more serious vulnerability.
The company recalled the CDs. In the United States, a class-action lawsuit was brought against Sony BMG. The Greek wiretapping case of 2004–05, referred to as the Greek Watergate, involved the illegal telephone tapping of more than 100 mobile phones on the Vodafone Greece network belonging to members of the Greek government and top-ranking civil servants. The taps began sometime near the beginning of August 2004 and were removed in March 2005 without the identity of the perpetrators being discovered. The intruders installed a rootkit targeting Ericsson's AXE telephone exchange. According to IEEE Spectrum, this was "the first time a rootkit has been observed on a special-purpose system, in this case an Ericsson telephone switch." The rootkit was designed to patch the memory of the exchange while it was running, enable wiretapping while disabling audit logs, patch the commands that list active processes and active data blocks, and modify the data block checksum verification command. A "backdoor" allowed an operator with sysadmin status to deactivate the exchange's transaction log.
A firewall is a fire-resistant barrier used to prevent the spread of fire for a prescribed period of time. Firewalls are built between or through buildings, at electrical substation transformers, or within an aircraft or vehicle. Firewalls can be used to subdivide a building into separate fire areas and are constructed in accordance with the locally applicable building codes. Firewalls are a portion of a building's passive fire protection systems. Firewalls can be used to separate high-value transformers at an electrical substation in the event of a mineral oil tank rupture and ignition; the firewall serves as a fire containment wall between one oil-filled transformer and neighboring transformers, building structures and site equipment. There are three main classifications of fire-rated walls: fire walls, fire barriers and fire partitions. To the layperson, common usage includes all three when referring to a firewall, unless distinguishing between them becomes necessary. In addition, specialty fire-rated walls such as a high challenge fire wall require further distinctions.
A firewall is an assembly of materials used to separate transformers, structures, or large buildings to prevent the spread of fire, achieved by constructing a wall which extends from the foundation through the roof with a prescribed fire resistance duration and independent structural stability. This allows a building to be subdivided into smaller sections. If a section becomes structurally unstable due to fire or other causes, that section can break or fall away from the other sections of the building. A fire barrier wall, or a fire partition, is a fire-rated wall assembly that provides lower levels of protection than a fire wall; the main differences are as follows. Fire barrier walls are continuous from an exterior wall to an exterior wall, or from a floor below to a floor or roof above, or from one fire barrier wall to another fire barrier wall, having a fire resistance rating equal to or greater than the required rating for the application. Fire barriers are continuous through concealed spaces to the floor deck or roof deck above the barrier.
Fire partitions are not required to extend through concealed spaces if the construction assembly forming the bottom of the concealed space, such as the ceiling, has a fire resistance rating at least equal to or greater than that of the fire partition. A high challenge fire wall is a wall used to separate transformers, structures, or buildings, or a wall subdividing a building with high-fire-challenge occupancies, having enhanced fire resistance ratings and enhanced appurtenance protection to prevent the spread of fire, and having structural stability. Portions of structures that are subdivided by fire walls are permitted to be considered separate buildings, provided that the fire walls have sufficient structural stability to maintain the integrity of the wall in the event of the collapse of the building construction on either side of the wall. Fire rating – Fire walls are constructed in such a way as to achieve a code-determined fire-resistance rating, thus forming part of a fire compartment's passive fire protection.
Codes in Germany include repeated impact-force testing of new fire wall systems; other codes require impact resistance on a performance basis. Design loads – Fire walls must withstand a minimum lateral load of 5 lb/sq. ft. plus additional seismic loads. Substation transformer firewalls are free-standing modular walls, custom designed and engineered to meet application needs. Building firewalls extend through the roof and terminate at a code-determined height above it; they are finished off on the top with flashing for protection against the elements. Building and structural firewalls in North America are made of concrete, concrete blocks, or reinforced concrete. Older fire walls, built prior to World War II, utilized brick materials. Fire barrier walls are constructed of drywall or gypsum board partitions with wood- or metal-framed studs. Penetrations – Penetrations through fire walls, such as for pipes and cables, must be protected with a listed firestop assembly designed to prevent the spread of fire through wall penetrations.
Penetrations must not defeat the structural integrity of the wall, such that the wall can no longer withstand the prescribed fire duration without threat of collapse. Openings – Other openings in fire walls, such as doors and windows, must be fire-rated fire door assemblies and fire window assemblies. Firewalls are used in varied applications that require specific design and performance specifications. Knowing the potential conditions that may exist during a fire is critical to selecting and installing an effective firewall. For example, a firewall designed to meet National Fire Protection Association (NFPA) 221-09 section A.5.7, which indicates an average temperature of 800 °F, is not designed to withstand the higher temperatures present in higher-challenge fires, and as a result would fail to function for the expected duration of the listed wall rating. Performance-based design takes into account the potential conditions during a fire. Understanding the thermal limitations of materials is essential to using the correct material for the application.
Laboratory testing is used to simulate fire scenarios and wall loading conditions. The testing results in an assigned listing number for the fire-rated assembly that defines the expected fire resistance duration and wall structural integrity under the tested conditions. Designers may elect to specify a listed fire wall assembly, or design a wall system that would require performance testing to certify the expected protections before the designed fire-rated wall system is used. Firewalls are regularly found in aircraft and vehicles.
Multi-factor authentication is an authentication method in which a computer user is granted access only after presenting two or more pieces of evidence (factors) to an authentication mechanism: knowledge (something the user knows), possession (something the user has), and inherence (something the user is). Two-factor authentication is a subset of multi-factor authentication: a method of confirming users' claimed identities by using a combination of two different factors: 1) something they know, 2) something they have, or 3) something they are. A good example of two-factor authentication is the withdrawal of money from an ATM: only the correct combination of a bank card (something the user possesses) and a PIN (something the user knows) allows the transaction. Two other examples are supplementing a user-controlled password with a one-time password or code generated or received by an authenticator that only the user possesses. Two-step verification or two-step authentication is a method of confirming a user's claimed identity by utilizing something they know plus a second factor other than something they have or something they are. An example of a second step is the user repeating back something sent to them through an out-of-band mechanism.
Or, the second step might be a six-digit number generated by an app that is common to the user and the authentication system. The use of multiple authentication factors to prove one's identity is based on the premise that an unauthorized actor is unlikely to be able to supply all the factors required for access. If, in an authentication attempt, at least one of the components is missing or supplied incorrectly, the user's identity is not established with sufficient certainty, and access to the asset being protected by multi-factor authentication remains blocked. The authentication factors of a multi-factor authentication scheme may include: some physical object in the possession of the user, such as a USB stick with a secret token, a bank card, a key, etc.; some secret known to the user, such as a password, PIN, TAN, etc.; some physical characteristic of the user, such as a fingerprint, eye iris, typing speed, pattern in key press intervals, etc.; or somewhere the user is, such as connection to a specific computing network, or utilizing a GPS signal to identify the location.
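The six-digit app-generated number described above is typically a time-based one-time password (TOTP) in the style of RFC 6238: the device and the server share a secret, and each independently derives the same short-lived code by HMAC-hashing the current 30-second time step. A minimal sketch of the derivation (simplified; real authenticator apps also handle base32-encoded secrets, clock drift, and rate limiting):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Derive a time-based one-time password (RFC 6238 style, HMAC-SHA-1)."""
    counter = int((time.time() if now is None else now) // timestep)
    message = struct.pack(">Q", counter)          # 8-byte big-endian time counter
    mac = hmac.new(secret, message, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from the shared secret and the clock, nothing needs to be transmitted to the device, which is why such apps work offline. For the RFC test secret `12345678901234567890` at time 59 seconds, this yields the published test-vector code 287082.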
Knowledge factors are the most commonly used form of authentication. In this form, the user is required to prove knowledge of a secret. A password is a secret word or string of characters used for user authentication; this is the most commonly used mechanism of authentication. Many multi-factor authentication techniques rely on a password as one factor of authentication. Variations include both longer passphrases formed from multiple words and the shorter, purely numeric personal identification number (PIN) used for ATM access. Traditionally, passwords are expected to be memorized. Many secret questions, such as "Where were you born?", are poor examples of a knowledge factor because they may be known to a wide group of people, or be able to be researched. Possession factors have long been used for authentication in the form of a key to a lock. The basic principle is that the key embodies a secret shared between the lock and the key; the same principle underlies possession-factor authentication in computer systems. A security token is an example of a possession factor.
Disconnected tokens have no connection to the client computer. They use a built-in screen to display the generated authentication data, which the user types in manually. Connected tokens are devices that are physically connected to the computer to be used; they transmit data automatically. There are a number of types, including card readers, wireless tags and USB tokens. A software token is a type of two-factor authentication security device that may be used to authorize the use of computer services. Software tokens are stored on a general-purpose electronic device such as a desktop computer, laptop, PDA, or mobile phone and can be duplicated. A soft token may not be a device at all: a certificate loaded onto the device and stored securely may serve this purpose as well. Inherence factors are factors associated with the user, usually biometric methods, including fingerprint, voice, or iris recognition. Behavioral biometrics such as keystroke dynamics can also be used. A fourth factor is coming into play involving the physical location of the user. While hard-wired to the corporate network, a user could be allowed to log in using only a PIN code; while off the network, entering a code from a soft token as well could be required.
This could be seen as an acceptable standard. Systems for network admission control work in similar ways, where your level of network access can be contingent on the specific network your device is connected to, such as Wi-Fi versus wired connectivity; this allows a user to move between offices and dynamically receive the same level of network access in each. Many multi-factor authentication vendors offer mobile phone-based authentication; some methods include push-based authentication, QR code-based authentication, one-time password authentication, and SMS-based verification. SMS-based verification suffers from some security concerns.
Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information and for entertainment. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, the possibilities are vast." It defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology and software engineering. However, Computing Curricula 2005 also recognizes that the meaning of "computing" depends on the context: computing has other meanings that are more specific, based on the context in which the term is used.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult; because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been defined more narrowly, as in a 1989 ACM report, Computing as a Discipline: the discipline of computing is the systematic study of algorithmic processes that describe and transform information: their theory, design, efficiency and application; the fundamental question underlying all computing is "What can be automated?" The term "computing" is also synonymous with counting and calculating. In earlier times it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology, and includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers. But long before abstractions like number arose, there were mathematical concepts serving the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC; its original style of usage was by lines drawn in sand with pebbles. This was the first known calculation aid, preceding Greek methods by 2,000 years. Abaci of a more modern design are still used as calculation tools today. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop its sequence of steps, known as an algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions is converted to machine instructions according to the type of central processing unit. The execution process carries out the instructions in a computer program: each instruction triggers sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures and documentation concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the older term hardware; in contrast to hardware, software is intangible. Software is sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software or published separately; some users need never install any. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user.
Digital Equipment Corporation
Digital Equipment Corporation, using the trademark Digital, was a major American company in the computer industry from the 1950s to the 1990s. DEC was a leading vendor of computer systems, including computers and peripherals; their PDP and successor VAX products were the most successful of all minicomputers in terms of sales. DEC was acquired in June 1998 by Compaq, in what was at that time the largest merger in the history of the computer industry. At the time, Compaq was focused on the enterprise market and had purchased several other large vendors. DEC was a major player overseas. However, Compaq had little idea what to do with its acquisitions, and soon found itself in financial difficulty of its own; the company subsequently merged with Hewlett-Packard in May 2002. As of 2007, PDP-11, VAX and AlphaServer systems were still produced under the HP name. From 1957 until 1992, DEC's headquarters were located in a former wool mill in Maynard, Massachusetts.
Some parts of DEC, notably the compiler business and the Hudson, Massachusetts facility, were sold to Intel. Focusing on the small end of the computer market allowed DEC to grow without its potential competitors making serious efforts to compete with it. Their PDP series of machines became popular in the 1960s, with the PDP-8 considered to be the first successful minicomputer. Looking to simplify and update their line, DEC replaced most of their smaller machines with the PDP-11 in 1970, eventually selling over 600,000 units and cementing DEC's position in the industry. Designed as a follow-on to the PDP-11, DEC's VAX-11 series was the first widely used 32-bit minicomputer, sometimes referred to as a "supermini"; these systems were able to compete in many roles with larger mainframe computers, such as the IBM System/370. The VAX was a best-seller, with over 400,000 sold; its sales through the 1980s propelled the company into second place among computer companies in the industry. At its peak, DEC was the second largest employer in Massachusetts, second only to the Massachusetts state government.
The rapid rise of the business microcomputer in the late 1980s, and the introduction of powerful 32-bit systems in the 1990s, eroded the value of DEC's systems. DEC's last major attempt to find a space in the changing market was the DEC Alpha 64-bit RISC instruction set architecture. DEC started work on Alpha as a way to re-implement their VAX series, but also employed it in a range of high-performance workstations. Although the Alpha processor family met both of these goals and was, for most of its lifetime, the fastest processor family on the market, its high asking prices meant it was outsold by lower-priced x86 chips from Intel and clone makers such as AMD.
Beyond DECsystem-10/20, PDP, VAX and Alpha, DEC was well respected for its communication subsystem designs, such as Ethernet, DNA and DSA, and for its "dumb terminal" subsystems, including the VT100 and DECserver products. DEC's corporate research was conducted by its Research Laboratories, some of which are still operated by Hewlett-Packard. The laboratories were: the Western Research Laboratory in Palo Alto, California, US; the Systems Research Center in Palo Alto, California, US; the Network Systems Laboratory in Palo Alto, California, US; the Cambridge Research Laboratory in Cambridge, Massachusetts, US; the Paris Research Laboratory in Paris, France; and the MetroWest Technology Campus in Maynard, Massachusetts, US. Some of the former employees of DEC's Research Labs, or of DEC's R&D in general, include Gordon Bell, technical visionary and VP of Engineering 1972–83. DEC supported ANSI standards, including the ASCII character set, which survives in Unicode and the ISO 8859 character set family.
DEC's own Multinational Character Set also had a large influence on ISO 8859-1 and, by extension, Unicode. The first versions of the C language and the Unix operating system ran on DEC's PDP series of computers, which were among the first commercially viable minicomputers, although for several years DEC itself did not encourage the use of Unix. DEC produced widely used and influential interactive operating systems.
Information security, sometimes shortened to InfoSec, is the practice of preventing unauthorized access, disclosure, modification, recording or destruction of information. The information or data may take any form, e.g. electronic or physical. Information security's primary focus is the balanced protection of the confidentiality, integrity and availability of data while maintaining a focus on efficient policy implementation, all without hampering organization productivity. This is achieved through a multi-step risk management process that identifies assets, threat sources, potential impacts and possible controls, followed by assessment of the effectiveness of the risk management plan. To standardize this discipline, academics and professionals collaborate to set basic guidance and industry standards on passwords, antivirus software, encryption software, legal liability and user/administrator training. This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed and transferred.
However, the implementation of any standards and guidance within an entity may have limited effect if a culture of continual improvement is not adopted. At the core of information security is information assurance: the act of maintaining the confidentiality, integrity and availability of information, ensuring that information is not compromised in any way when critical issues arise. These issues include, but are not limited to, natural disasters, computer/server malfunction and physical theft. While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized, with information assurance now typically being dealt with by information technology security specialists. These specialists apply information security to technology. It is worthwhile to note that a computer does not necessarily mean a home desktop: a computer is any device with a processor and some memory. Such devices can range from non-networked standalone devices as simple as calculators to networked mobile computing devices such as smartphones and tablet computers.
IT security specialists are found in almost any major enterprise or establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the technology within the company secure from malicious cyber attacks that attempt to acquire critical private information or gain control of internal systems. The field of information security has grown and evolved significantly in recent years. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery and digital forensics. Information security professionals are stable in their employment; as of 2013, more than 80 percent of professionals had no change in employer or employment over a period of a year, and the number of professionals is projected to grow more than 11 percent annually from 2014 to 2019. Information security threats come in many different forms.
Some of the most common threats today are software attacks, theft of intellectual property, identity theft, theft of equipment or information, sabotage and information extortion. Most people have experienced software attacks of some sort; viruses, phishing attacks and Trojan horses are a few common examples. The theft of intellectual property has been an extensive issue for many businesses in the IT field. Identity theft is the attempt to act as someone else in order to obtain that person's personal information or to take advantage of their access to vital information. Theft of equipment or information is becoming more prevalent today, because most devices are mobile, prone to theft, and far more desirable as their data capacity increases. Sabotage consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property to its owner, as with ransomware.
There are many ways to help protect yourself from some of these attacks, but one of the most functional precautions is user carefulness. Governments, corporations, financial institutions and private businesses amass a great deal of confidential information about their employees, products and financial status. Should confidential information about a business's customers, finances or new product line fall into the hands of a competitor or a black hat hacker, the business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation. From a business perspective, information security must be balanced against cost. For the individual, information security has a significant effect on privacy, which is viewed differently in various cultures. Possible responses to a security threat or risk are: reduce/mitigate – implement safeguards and countermeasures to eliminate vulnerabilities or block threats; assign/transfer – place the cost of the threat onto another entity or organization, such as by purchasing insurance or outsourcing; accept – evaluate whether the cost of the countermeasure outweighs the possible cost of loss due to the threat.
Eavesdropping is the act of secretly or stealthily listening to the private conversation or communications of others without their consent. The practice is regarded as unethical, and in many jurisdictions it is illegal. The verb eavesdrop is a back-formation from the noun eavesdropper, formed from the related noun eavesdrop; an eavesdropper was someone who would hang from the eave of a building so as to hear what was said within. The PBS documentaries Inside the Court of Henry VIII and Secrets of Henry VIII's Palace include segments that display and discuss "eavedrops": carved wooden figures Henry VIII had built into the eaves of Hampton Court to discourage unwanted gossip or dissension from the King's wishes and rule, to foment paranoia and fear, and to demonstrate that everything said there was being overheard. Eavesdropping vectors include telephone lines, cellular networks and private instant messaging. VoIP communications software is also vulnerable to electronic eavesdropping via infections such as trojans.
Network eavesdropping is a network-layer attack that focuses on capturing packets transmitted by other computers on the network and reading the data content in search of any type of information. This type of attack is among the most effective when no encryption services are used, and it is also linked to the collection of metadata. Those who perform this type of attack are generally black-hat hackers.
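To make concrete why unencrypted traffic matters here, the sketch below scans captured packet payloads for credential-like fields. The payloads and field names are fabricated for this illustration; an actual attack would feed in live traffic from a packet sniffer, which is deliberately omitted.

```python
import re

# Fabricated example payloads, standing in for packets captured from an
# unencrypted network segment (e.g. plaintext HTTP). Illustrative only.
captured_payloads = [
    b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n",
    b"POST /login HTTP/1.1\r\nHost: example.com\r\n\r\nuser=alice&password=hunter2",
]

# Field names an eavesdropper might search for in cleartext traffic.
CREDENTIAL_RE = re.compile(rb"(username|user|password|pass)=([^&\s]+)")

def extract_credentials(payloads):
    """Return (field, value) pairs visible in plaintext payloads."""
    hits = []
    for payload in payloads:
        for field, value in CREDENTIAL_RE.findall(payload):
            hits.append((field.decode(), value.decode()))
    return hits
```

With transport encryption such as TLS, the captured bytes would be ciphertext and this kind of scan would find nothing, which is precisely why the attack depends on the absence of encryption.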