Arcserve is a provider of data protection and recovery solutions for enterprise and mid-market businesses. Arcserve was founded in 1983 as Cheyenne Software. Arcserve was first developed as a product to back up other software programs and to ensure that data on the network could not be lost; the major function of the first release was to automatically copy all of the information in the system so that mishaps such as power failures and equipment malfunctions would not destroy or erase it. During the early 1990s, Arcserve became Cheyenne's flagship product, with massive growth in sales. Cheyenne brought out an improved version of its core Arcserve program in 1993 and began distribution through original equipment manufacturers (OEMs). During the mid-1990s, Cheyenne continued to release a series of new Arcserve products tailored for different market segments, such as Macintosh and Windows users.
Software vendor CA Technologies (then known as Computer Associates) acquired Cheyenne in 1996 and continued to develop and market the Arcserve product under the same brand. In August 2014, Arcserve became an independent company when Marlin Equity Partners acquired the business from CA Technologies; the flagship product was renamed Arcserve Unified Data Protection (UDP). Arcserve provides some of the most widely deployed independently developed backup software products on the market and supports a wide variety of platforms and applications. Arcserve Unified Data Protection is offered as integrated software, as set-and-forget virtual or physical appliances, and through the Arcserve Cloud. On April 26, 2017, Arcserve announced the acquisition of email archiving technology, now named Arcserve UDP Archiving. On July 11, 2017, Arcserve announced the acquisition of Zetta, a cloud-based disaster recovery provider, through which it now offers direct-to-cloud DRaaS and BaaS with Arcserve UDP Cloud Direct.
Attached Resource Computer NETwork (ARCNET) is a communications protocol for local area networks. ARCNET was the first widely available networking system for microcomputers, and it was later applied to embedded systems where certain features of the protocol are useful. ARCNET was developed by principal development engineer John Murphy at Datapoint Corporation in 1976, under Victor Poor, and was announced in 1977. It was developed to connect groups of Datapoint 2200 terminals to a shared 8" floppy disk system. It was the first loosely coupled LAN-based clustering solution, making no assumptions about the type of computers that would be connected; this was in contrast to contemporary larger and more expensive computer systems such as DECnet or SNA, where a homogeneous group of similar or proprietary computers was connected as a cluster. The token-passing bus protocol of that I/O device-sharing network was subsequently applied to allowing processing nodes to communicate with each other for file-serving and computing-scalability purposes.
An application could be developed in DATABUS, Datapoint's proprietary COBOL-like language, and deployed on a single computer with dumb terminals. When the number of users outgrew the capacity of the original computer, additional 'compute' resource computers could be attached via ARCNET, running the same applications and accessing the same data. If more storage was needed, additional disk resource computers could also be attached. This incremental approach broke new ground, and by the end of the 1970s over ten thousand ARCNET LAN installations were in commercial use around the world, and Datapoint had become a Fortune 500 company. As microcomputers took over the industry, the well-proven and reliable ARCNET was offered as an inexpensive LAN for these machines. ARCNET remained proprietary until the early-to-mid 1980s; this did not cause concern at the time. The move to non-proprietary, open systems began as a response to the dominance of International Business Machines (IBM) and its Systems Network Architecture (SNA). In 1979, the Open Systems Interconnection Reference Model was published.
In 1980, Digital Equipment Corporation, Intel and Xerox published an open standard for Ethernet, soon adopted as the basis of standardization by the IEEE and the ISO. IBM responded by proposing Token Ring as an alternative to Ethernet but kept such tight control over standardization that competitors were wary of using it. ARCNET was less expensive than either, more reliable and more flexible, and by the late 1980s it had a market share about equal to that of Ethernet. Tandy/Radio Shack offered ARCNET as an application and file sharing medium for their TRS-80 Model II, Model 12, Model 16, Tandy 6000, Tandy 2000, Tandy 1000 and Tandy 1200 computer models, and there were even hooks in the Model 4P's ROM to boot from an ARCNET network. When Ethernet moved from coaxial cable to twisted pair and an "interconnected stars" cabling topology based on active hubs, it became much more attractive. Easier cabling, combined with the greater raw speed of Ethernet, helped to increase Ethernet demand, and as more companies entered the market the price of Ethernet started to fall, and ARCNET volumes tapered off.
In response to greater bandwidth needs and the challenge of Ethernet, a new standard called ARCnet Plus was developed by Datapoint and introduced in 1992. ARCnet Plus ran at 20 Mbit/s and was backward compatible with original ARCnet equipment. However, by the time ARCnet Plus products were ready for the market, Ethernet had captured the majority of the network market, and there was little incentive for users to move back to ARCnet; as a result, few ARCnet Plus products were produced, and those that were, built by Datapoint, were expensive and hard to find. ARCNET was standardized as ANSI ARCNET 878.1; it appears this was when the name changed from ARCnet to ARCNET. Other companies entered the market, notably Standard Microsystems, which produced systems based on a single VLSI chip originally developed as custom LSI for Datapoint but made available by Standard Microsystems to other customers. Datapoint eventually found itself in financial trouble and moved into video conferencing and custom programming in the embedded market. Though ARCNET is now rarely used for new general networks, the diminishing installed base still requires support, and it retains a niche in industrial control.
Original ARCNET used RG-62/U coaxial cable of 93 Ω impedance and either passive or active hubs in a star-wired bus topology. At the time of its greatest popularity, this was a significant advantage of ARCNET over Ethernet: a star-wired bus was much easier to build and expand than the clumsy linear-bus Ethernet of the time, and the "interconnected stars" cabling topology made it easy to add and remove nodes without taking down the whole network, as well as much easier to diagnose and isolate failures within a complex LAN. Another significant advantage ARCNET had over Ethernet was cable distance: ARCNET coax cable runs could extend 610 m between active hubs or between an active hub and an end node, while the RG-58 'thin' Ethernet most used at that time was limited to a maximum run of 185 m from end to end. ARCNET had the disadvantage of requiring either an active or passive hub between nodes if there were more than two nodes in the network, while thin Ethernet allowed nodes to be spaced anywhere along the linear coax cable.
However, ARCNET passive hubs were inexpensive, being composed of a simple resistive network in a small box.
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application; they are said to form a peer-to-peer network of nodes. Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model in which the consumption and supply of resources are divided. Emerging collaborative P2P systems are going beyond the era of peers doing similar things while sharing resources; they are looking for diverse peers that can bring unique resources and capabilities to a virtual community, empowering it to engage in greater tasks than individual peers could accomplish alone, yet tasks that are beneficial to all the peers.
While P2P systems had been used in many application domains, the architecture was popularized by the file sharing system Napster, released in 1999. The concept has inspired new philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, filesystems." The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1. Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links.
The early Internet was more open than the present-day network: two machines connected to the Internet could send packets to each other without firewalls and other security measures. This contrasts with the broadcasting-like structure of the web as it has developed. As a precursor to the Internet, ARPANET was a successful client-server network where "every participating node could request and serve content." However, ARPANET was not self-organized, and it lacked the ability to "provide any means for context or content-based routing beyond 'simple' address-based routing." Therefore, USENET, a distributed messaging system often described as an early peer-to-peer architecture, was established. It was developed in 1979; its basic model is a client-server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email, in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client-server relationship.
In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster. Napster was the beginning of peer-to-peer networks, as we know them today, where "participating users establish a virtual network independent from the physical network, without having to obey any administrative authorities or restrictions." A peer-to-peer network is designed around the notion of equal peer nodes functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is to and from a central server. A typical example of a file transfer that uses the client-server model is the File Transfer Protocol service in which the client and server programs are distinct: the clients initiate the transfer, the servers satisfy these requests. Peer-to-peer networks implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network.
Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers are able to communicate with each other directly via the logical overlay links. Overlays are used for indexing and peer discovery, and they make the P2P system independent of the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured. Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other. Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay; because the role of all peers in the network is the same, unstructured networks are robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network.
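A flooding search over a randomly connected unstructured overlay, as described above, can be sketched in a few lines of Python. All names here, such as `Peer` and `flood_search`, are hypothetical illustrations rather than the API of any real P2P system:

```python
import random

class Peer:
    """A node in a hypothetical unstructured P2P overlay (illustrative sketch)."""
    def __init__(self, name):
        self.name = name
        self.neighbors = []     # randomly formed overlay links
        self.resources = set()  # keys this peer can serve

    def flood_search(self, key, ttl, visited=None):
        """Flood a query to all neighbors until the TTL expires;
        returns the names of peers that hold `key`."""
        if visited is None:
            visited = set()
        visited.add(self.name)
        hits = [self.name] if key in self.resources else []
        if ttl > 0:
            for nb in self.neighbors:
                if nb.name not in visited:
                    hits += nb.flood_search(key, ttl - 1, visited)
        return hits

# Build a small random overlay: each peer initiates links to three random others.
random.seed(1)
peers = [Peer(f"p{i}") for i in range(10)]
for p in peers:
    for nb in random.sample([q for q in peers if q is not p], 3):
        if nb not in p.neighbors:
            p.neighbors.append(nb)
            nb.neighbors.append(p)  # overlay links are bidirectional
peers[7].resources.add("song.mp3")
print(peers[0].flood_search("song.mp3", ttl=9))
```

Note how the overlay ignores the physical topology entirely: links are formed at random, and a query simply floods outward until its time-to-live is exhausted, which is why unstructured networks tolerate churn well but scale poorly for rare content.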
Computerworld is a decades-old professional publication which in 2014 "went digital." Its audience is information technology and business technology professionals, and it is available via a publication website and as a digital magazine. It is published in many countries around the world under similar names; each country's version of Computerworld is managed independently. The parent company of Computerworld US is IDG Communications; the first issue was published in 1967. IDG offers the brand "Computerworld" in 47 countries worldwide, though the name and frequency differ: when IDG established its Swedish edition in 1983, the title "Computerworld" had already been registered in Sweden by another publisher. That edition is distributed as a morning newspaper in tabloid format, in 51,000 copies with an estimated 120,000 readers. From 1999 to 2008 it was published three days a week, but since 2009 it has been published only on Tuesdays and Fridays. In June 2014, Computerworld US abandoned its print edition, becoming a digital-only publication.
In late July 2014, Computerworld debuted the monthly Computerworld Digital Magazine. In 2017, Computerworld celebrated its 50th year in tech publishing with a number of features and stories highlighting the publication's history. Computerworld's website premiered nearly two decades before its last printed issue. Computerworld US serves IT and business management with coverage of information technology, emerging technologies and analysis of technology trends. Computerworld publishes several notable special reports each year, including the 100 Best Places to Work in IT, the IT Salary Survey, the DATA+ Editors' Choice Awards and the annual Forecast research report. Computerworld has in the past published stories highlighting the effects of immigration to the U.S. on U.S. software engineers. The executive editor of Computerworld in the U.S. is Ken Mingis, who leads a small staff of editors and freelancers covering a variety of enterprise IT topics.
A CD-ROM (Compact Disc Read-Only Memory) is a pre-pressed optical compact disc that contains data. Computers can read—but not write to or erase—CD-ROMs, i.e. it is a type of read-only memory. During the 1990s, CD-ROMs were popularly used to distribute software and data for computers and fourth-generation video game consoles. Some CDs, called enhanced CDs, hold both computer data and audio, with the latter capable of being played on a CD player, while the data is only usable on a computer. The CD-ROM format was developed by the Japanese company Denon in 1982; it was an extension of Compact Disc Digital Audio that adapted the format to hold any form of digital data, with an initial storage capacity of 553 MiB. CD-ROM was introduced by Denon and Sony at a Japanese computer show in 1984. The Yellow Book is the technical standard that defines the format: one of a set of color-bound books containing the technical specifications for all CD formats, the Yellow Book, standardized by Sony and Philips in 1983, specifies a format for discs with a maximum capacity of 650 MiB. CD-ROMs are identical in appearance to audio CDs, and data are stored and retrieved in a similar manner.
Discs are made from a 1.2 mm thick disc of polycarbonate plastic, with a thin layer of aluminium to make a reflective surface. The most common size of CD-ROM is 120 mm in diameter, though the smaller Mini CD standard with an 80 mm diameter, as well as shaped compact discs in numerous non-standard sizes and molds, are available. Data is stored on the disc as a series of microscopic indentations called pits, with the flat areas between them called lands. A laser is shone onto the reflective surface of the disc to read the pattern of pits and lands. Because the depth of the pits is one-quarter to one-sixth of the wavelength of the laser light used to read the disc, the reflected beam's phase is shifted in relation to the incoming beam, causing destructive interference and reducing the reflected beam's intensity. This pattern of changing intensity is converted into binary data. Several formats are used for data stored on compact discs, known collectively as the Rainbow Books. The Yellow Book, published in 1988, defines the specifications for CD-ROMs, standardized in 1989 as the ISO/IEC 10149 / ECMA-130 standard.
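The quarter-wavelength relationship described above can be checked with a small calculation. The figures below (a 780 nm infrared CD laser and a polycarbonate refractive index of about 1.55) are typical published values, used here as assumptions:

```python
# Numeric sketch of the quarter-wavelength pit-depth rule (assumed values).
laser_wavelength_nm = 780.0   # typical infrared CD laser
n_polycarbonate = 1.55        # approximate refractive index of the disc plastic

wavelength_in_disc = laser_wavelength_nm / n_polycarbonate  # ~503 nm inside the plastic
pit_depth = wavelength_in_disc / 4                          # ~126 nm, a quarter wave

# Light reflected from the bottom of a pit travels an extra 2 * pit_depth,
# i.e. half a wavelength, so it arrives out of phase with light reflected
# from the surrounding land: destructive interference, lower intensity.
extra_path = 2 * pit_depth
print(f"pit depth ~{pit_depth:.0f} nm, extra path ~{extra_path:.0f} nm")
```

The round trip through a quarter-wave pit produces exactly the half-wavelength path difference that makes the reflected beams cancel.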
The CD-ROM standard builds on top of the original Red Book CD-DA standard for CD audio. Other standards, such as the White Book for Video CDs, further define formats based on the CD-ROM specifications. The Yellow Book itself is not freely available, but the standards with the corresponding content can be downloaded for free from ISO or ECMA. There are several standards that define how to structure data files on a CD-ROM. ISO 9660 defines the standard file system for a CD-ROM. ISO 13490 is an improvement on this standard which adds support for non-sequential write-once and re-writeable discs such as CD-R and CD-RW, as well as multiple sessions. The ISO 13346 standard was designed to address most of the shortcomings of ISO 9660, and a subset of it evolved into the UDF format, which was adopted for DVDs. The bootable CD specification, called El Torito, was issued in January 1995 to make a CD emulate a hard disk or floppy disk. Data stored on CD-ROMs follows the standard CD data encoding techniques described in the Red Book specification.
This includes cross-interleaved Reed–Solomon coding (CIRC), eight-to-fourteen modulation (EFM), and the use of pits and lands for coding the bits into the physical surface of the CD. The structures used to group data on a CD-ROM are derived from the Red Book. Like audio CDs, a CD-ROM sector contains 2,352 bytes of user data, composed of 98 frames of 33 bytes each, of which 24 bytes per frame carry user data (98 × 24 = 2,352). Unlike audio CDs, the data stored in these sectors can correspond to any type of digital data, not audio samples encoded according to the audio CD specification. To structure and protect this data, the CD-ROM standard further defines two sector modes, Mode 1 and Mode 2, which describe two different layouts for the data inside a sector. A track inside a CD-ROM only contains sectors in the same mode, but if multiple tracks are present in a CD-ROM, each track can have its sectors in a different mode from the rest of the tracks; such tracks can also coexist with audio CD tracks, as is the case with mixed mode CDs. Both Mode 1 and Mode 2 sectors use the first 16 bytes for header information, but differ in the use of the remaining 2,336 bytes due to the use of error correction bytes.
Unlike an audio CD, a CD-ROM cannot rely on error concealment by interpolation. To achieve improved error correction and detection, Mode 1, used for digital data, adds a 32-bit cyclic redundancy check (CRC) code for error detection and a third layer of Reed–Solomon error correction using a Reed–Solomon Product-like Code (RSPC). Mode 1 therefore uses 288 bytes per sector for error detection and correction, leaving 2,048 bytes per sector available for data. Mode 2, more appropriate for image or video data, contains no additional error detection or correction bytes, and therefore has 2,336 available data bytes per sector. Note that both modes, like audio CDs, still benefit from the lower layers of error correction at the frame level. Before being stored on a disc with the techniques described above, each CD-ROM sector is scrambled to prevent some problematic patterns from showing up; these scrambled sectors then follow the same encoding process described in the Red Book in order to be stored on disc.
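The byte counts for the two sector modes follow directly from the figures above; a quick arithmetic sketch:

```python
# Arithmetic sketch of the CD-ROM sector layouts described above.
FRAMES_PER_SECTOR = 98
USER_BYTES_PER_FRAME = 24
SECTOR_BYTES = FRAMES_PER_SECTOR * USER_BYTES_PER_FRAME  # raw 2,352-byte sector

SYNC_AND_HEADER = 16   # sync pattern, sector address and mode byte
MODE1_EDC_ECC = 288    # CRC (EDC) plus Reed-Solomon Product-like Code (ECC)

mode1_data = SECTOR_BYTES - SYNC_AND_HEADER - MODE1_EDC_ECC  # Mode 1 payload
mode2_data = SECTOR_BYTES - SYNC_AND_HEADER                  # Mode 2 payload

print(SECTOR_BYTES, mode1_data, mode2_data)  # 2352 2048 2336
```

This recovers the familiar 2,048-byte logical block size of Mode 1 CD-ROMs, and shows exactly what Mode 2 trades away (the 288 EDC/ECC bytes) for its larger payload.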
Ethernet is a family of computer networking technologies used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility while being refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses, as well as error-checking data so that damaged frames can be detected and discarded.
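As a rough illustration of the frame fields just mentioned, the following Python sketch parses the 14-byte Ethernet II header (destination MAC, source MAC, EtherType); the sample frame bytes are invented for the example:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Parse the 14-byte Ethernet II header: 48-bit destination and source
    MAC addresses followed by a 16-bit EtherType (illustrative sketch)."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return mac(dst), mac(src), ethertype

# A hypothetical frame: broadcast destination, EtherType 0x0800 (IPv4).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"...payload..."
dst, src, ethertype = parse_ethernet_header(frame)
print(dst, src, hex(ethertype))  # ff:ff:ff:ff:ff:ff 00:11:22:33:44:55 0x800
```

A real frame would also carry a 32-bit frame check sequence after the payload, which is the error-checking data that lets damaged frames be detected and discarded.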
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and Ethernet frame format have influenced other networking protocols, including Wi-Fi wireless networking technology. Ethernet is used in both home and industry; the Internet Protocol is frequently carried over Ethernet, so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974, inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper; that same year, Ron Crane, Bob Garner and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel and Xerox to work together to promote Ethernet as a standard; as part of that process, Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard (for Digital, Intel and Xerox) specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but was able to adapt to market realities and shift to inexpensive thin coaxial cable and, later, ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was clearly the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as for Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. Parallel port based Ethernet adapters were produced for a time, with drivers for DOS and Windows. By the early 1990s, Ethernet had become so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations; this process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications, and it is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks. The "DIX group", with Gary Robinson, Phil Arst and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle an
Twisted pair cabling is a type of wiring in which two conductors of a single circuit are twisted together for the purposes of improving electromagnetic compatibility. Compared to a single conductor or an untwisted balanced pair, a twisted pair reduces electromagnetic radiation from the pair and crosstalk between neighboring pairs, and improves rejection of external electromagnetic interference. It was invented by Alexander Graham Bell. In a balanced line, the two wires carry equal and opposite signals, and the destination detects the difference between the two; this is known as differential signaling. Noise sources introduce signals into the wires by coupling of electric or magnetic fields and tend to couple to both wires equally; the noise thus produces a common-mode signal which can be canceled at the receiver when the difference signal is taken. Differential signaling starts to fail when the noise source is closer to one wire than the other, so that the coupling is no longer equal; this problem is apparent in telecommunication cables, where pairs in the same cable lie next to each other for many miles.
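The common-mode cancellation described above can be illustrated numerically; the voltages below are invented for the sketch:

```python
# Numeric sketch of common-mode noise rejection in differential signaling.
# Values are illustrative, not drawn from any real transmission-line model.
signal = [0.5, -0.5, 0.5, 0.5, -0.5]  # data encoded as +/-0.5 V
noise  = [0.2, -0.1, 0.3, 0.0, 0.1]   # EMI coupled equally onto both wires

wire_a = [ s + n for s, n in zip(signal, noise)]  # carries +signal
wire_b = [-s + n for s, n in zip(signal, noise)]  # carries -signal

# The receiver takes the difference between the wires: the common-mode
# noise term cancels, and (a - b) / 2 recovers the original signal.
received = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]
print(received)
```

If the noise coupled unequally onto the two wires (the failure case described above), the difference would no longer cancel it, which is exactly the problem twisting addresses.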
Twisting the pairs counters this effect, as on each half twist the wire nearest to the noise source is exchanged. Provided the interfering source remains uniform, or nearly so, over the distance of a single twist, the induced noise will remain common-mode. The twist rate makes up part of the specification for a given type of cable. When nearby pairs have equal twist rates, the same conductors of the different pairs may repeatedly lie next to each other, undoing the benefits of differential mode. For this reason it is specified that, at least for cables containing small numbers of pairs, the twist rates must differ. In contrast to shielded or foiled twisted pair, UTP (unshielded twisted pair) cable is not surrounded by any shielding. UTP is the primary wire type for telephone usage and is common for computer networking. The earliest telephones used open-wire single-wire earth-return circuits. In the 1880s electric trams were installed in many cities, and these induced interference into the telephone circuits. Lawsuits being unavailing, the telephone companies converted to balanced circuits, which had the incidental benefit of reducing attenuation, hence increasing range.
As electrical power distribution became more commonplace, this measure proved inadequate. Two wires, strung on either side of cross bars on utility poles, shared the route with electrical power lines. Within a few years, the growing use of electricity again brought an increase of interference, so engineers devised a method called wire transposition to cancel out the interference. In wire transposition, the wires exchange position once every several poles. In this way, the two wires would receive similar EMI from power lines; this represented an early implementation of twisting, with a twist rate of about four twists per kilometre, or six per mile. Such open-wire balanced lines with periodic transpositions still survive today in some rural areas. Twisted-pair cabling was invented by Alexander Graham Bell in 1881. By 1900, the entire American telephone line network was either twisted pair or open wire with transposition to guard against interference. Today, most of the millions of kilometres of twisted pairs in the world are outdoor landlines, owned by telephone companies, used for voice service, and only handled or seen by telephone workers.
Unshielded twisted pair (UTP) cables are found in many Ethernet networks and telephone systems. For indoor telephone applications, UTP is grouped into sets of 25 pairs according to a standard 25-pair color code originally developed by AT&T Corporation. A typical subset of these colors shows up in most UTP cables. The cables are made with copper wires of 22 or 24 American Wire Gauge, with the colored insulation made from a material such as polyethylene or FEP, and the total package covered in a polyethylene jacket. For urban outdoor telephone cables containing hundreds or thousands of pairs, the cable is divided into small but identical bundles, each consisting of twisted pairs. The bundles are in turn twisted together to make up the cable. Pairs having the same twist rate within the cable can still experience some degree of crosstalk, so wire pairs are selected carefully to minimize crosstalk within a large cable. UTP cable is the most common cable used in computer networking; modern Ethernet, the most common data networking standard, can use UTP cables.
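The 25-pair color code mentioned above combines five "tip" colors with five "ring" colors; a short sketch generating the standard pair identities:

```python
# Sketch of the 25-pair color code: five "tip" colors crossed with five
# "ring" colors identify the 25 pairs of a bundle. The color orderings
# below follow the standard scheme as commonly documented.
TIP_COLORS  = ["white", "red", "black", "yellow", "violet"]
RING_COLORS = ["blue", "orange", "green", "brown", "slate"]

# Pair 1 is white/blue, pair 2 white/orange, ... pair 25 is violet/slate.
pairs = [(tip, ring) for tip in TIP_COLORS for ring in RING_COLORS]

for number, (tip, ring) in enumerate(pairs, start=1):
    print(f"pair {number:2d}: {tip}/{ring}")
```

Each wire of a pair carries bands of both colors (the tip wire is mostly the tip color with ring-color bands, and vice versa), so every conductor in a 25-pair bundle can be identified unambiguously.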
Twisted pair cabling is used in data networks for short and medium length connections because of its lower cost compared to optical fiber and coaxial cable. UTP is also finding increasing use in video applications, primarily in security cameras. Many cameras include a UTP output with screw terminals; as UTP is a balanced transmission line, a balun is needed to connect to unbalanced equipment, for example equipment using BNC connectors and designed for coaxial cable. Shielded twisted pair cables incorporate shielding in an attempt to prevent electromagnetic interference. Shielding provides an electrically conductive barrier to attenuate electromagnetic waves external to the shield; such shielding can be applied to individual pairs or quads, or to the cable as a whole. Individual pairs are typically foil shielded, while an overall cable may use a braided screen or foil.