Video is an electronic medium for the recording, playback, and display of moving visual media. Video systems vary in display resolution, aspect ratio, refresh rate, color capabilities, and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcast, magnetic tape, optical discs, computer files, and network streaming. Video technology was first developed for mechanical television systems, which were replaced by cathode ray tube television systems, which in turn gave way to flat panel displays of several types. Video was initially exclusively a live technology. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders. In 1951 the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic video tape.
Video tape recorders sold for US$50,000 in 1956, and videotapes cost US$300 per one-hour reel. Prices dropped over the years, and the use of digital techniques created digital video, which allows higher quality at much lower cost than earlier analog technology. After the invention of the DVD in 1997 and the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allow inexpensive personal computers and smartphones to capture, store, and transmit digital video, further reducing the cost of video production and allowing program-makers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition are relegating analog video to the status of a legacy technology in most parts of the world. As of 2015, with the increasing use of high-resolution video cameras with improved dynamic range and color gamuts, and of high-dynamic-range digital intermediate data formats with improved color depth, modern digital video technology is converging with digital film technology.
Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second for old mechanical cameras to 120 or more frames per second for new professional cameras. The PAL and SECAM standards specify 25 frames per second. Film is shot at the slower frame rate of 24 frames per second, which complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second. Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence; when displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.
In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field consisting of the odd-numbered lines and an even field consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of moving parts of the image when viewed on an interlaced CRT display. NTSC, PAL, and SECAM are interlaced formats. Abbreviated video resolution specifications include an i to indicate interlacing. For example, the PAL video format is described as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second. When displaying a natively interlaced signal on a progressive scan device, overall spatial resolution is degraded by simple line doubling, and artifacts such as flickering or "comb" effects in moving parts of the image appear unless special signal processing eliminates them.
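The relationship between fields and frames in a notation like 576i50 can be illustrated with a short sketch. This is an illustrative parser, not part of any standard; the function name and return shape are assumptions:

```python
import re

def parse_video_mode(mode):
    """Split a notation such as '576i50' into line count, scan type, and rate."""
    m = re.fullmatch(r"(\d+)([ip])(\d+(?:\.\d+)?)", mode)
    if not m:
        raise ValueError(f"unrecognized mode: {mode}")
    lines, scan, rate = int(m.group(1)), m.group(2), float(m.group(3))
    # For interlaced modes the trailing number counts fields, not frames:
    # two fields (the odd and even lines) make up one complete frame.
    frames = rate / 2 if scan == "i" else rate
    return {"lines": lines, "interlaced": scan == "i",
            "frames_per_second": frames}

print(parse_video_mode("576i50"))  # PAL: 50 fields -> 25 complete frames/s
print(parse_video_mode("720p60"))  # progressive: 60 full frames/s
```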
A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD, or satellite source on a progressive scan device such as an LCD television, digital video projector, or plasma panel. Deinterlacing cannot, however, produce video quality equivalent to true progressive scan source material. Aspect ratio describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are rectangular and so can be described by a ratio between width and height. The ratio of width to height for a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack is 1.375:1. Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard.
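Reducing a resolution to its simplest width:height ratio is a greatest-common-divisor computation, as this small sketch shows:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a pixel resolution to its simplest width:height ratio
    (assumes square pixels)."""
    g = gcd(width, height)
    return width // g, height // g

print(aspect_ratio(640, 480))    # (4, 3)  -> about 1.33:1
print(aspect_ratio(1920, 1080))  # (16, 9) -> about 1.78:1
```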
Ethernet is a family of computer networking technologies used in local area networks, metropolitan area networks, and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and it has since retained a good deal of backward compatibility while being refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI, and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses and error-checking data so that damaged frames can be detected and discarded.
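The frame structure just described can be sketched in a few lines. This is a simplified illustration, not a complete implementation: a real Ethernet frame also includes a preamble and pads the payload to a 46-byte minimum, both omitted here. Python's `zlib.crc32` computes the same CRC-32 used for the Ethernet frame check sequence (FCS):

```python
import struct
import zlib

def build_ethernet_frame(dst, src, ethertype, payload):
    """Assemble a minimal Ethernet II frame: 6-byte destination and source
    MAC addresses, a 16-bit EtherType, the payload, and a CRC-32 frame
    check sequence so a receiver can detect and discard damaged frames."""
    header = dst + src + struct.pack("!H", ethertype)
    body = header + payload
    fcs = struct.pack("<I", zlib.crc32(body))  # FCS appended after the payload
    return body + fcs

dst = bytes.fromhex("ffffffffffff")   # broadcast destination address
src = bytes.fromhex("020000000001")   # a locally administered source MAC
frame = build_ethernet_frame(dst, src, 0x0800, b"hello")  # 0x0800 = IPv4

# A receiver recomputes the CRC over everything before the FCS and
# compares; a mismatch means the frame was damaged in transit.
assert zlib.crc32(frame[:-4]) == struct.unpack("<I", frame[-4:])[0]
```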
In the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols, including the Wi-Fi wireless networking technology. Ethernet is used in both homes and industry; because the Internet Protocol is commonly carried over Ethernet, it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974, inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, in which he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker, and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper; that same year, Ron Crane, Bob Garner, and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel, and Xerox to work together to promote Ethernet as a standard; as part of that process Xerox agreed to relinquish its 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but it was able to adapt to market realities and shift to inexpensive thin coaxial cable and, later, ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and by 1985, 3Com had sold 100,000. Parallel port based Ethernet adapters were also produced for a time, with drivers for DOS and Windows. By the early 1990s, Ethernet had become so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations; this process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications, and it is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers started project 802 to standardize local area networks. The "DIX-group", with Gary Robinson, Phil Arst, and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
Session Initiation Protocol
The Session Initiation Protocol (SIP) is a signaling protocol used for initiating, maintaining, and terminating real-time sessions that include voice, video, and messaging applications. SIP is used for signaling and controlling multimedia communication sessions in applications of Internet telephony for voice and video calls, in private IP telephone systems, in instant messaging over Internet Protocol networks, and in mobile phone calling over LTE. The protocol defines the specific format of messages exchanged and the sequence of communications for cooperation of the participants. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol and the Simple Mail Transfer Protocol. A call established with SIP may consist of multiple media streams, but no separate streams are required for applications, such as text messaging, that exchange data as payload in the SIP message. SIP works in conjunction with several other protocols that carry the session media. Most media type and parameter negotiation and media setup is performed with the Session Description Protocol (SDP), carried as payload in SIP messages.
SIP is designed to be independent of the underlying transport layer protocol and can be used with the User Datagram Protocol, the Transmission Control Protocol, and the Stream Control Transmission Protocol. For secure transmission of SIP messages over insecure network links, the protocol may be encrypted with Transport Layer Security. For the transmission of media streams, the SDP payload carried in SIP messages typically employs the Real-time Transport Protocol or the Secure Real-time Transport Protocol. SIP was designed by Mark Handley, Henning Schulzrinne, Eve Schooler, and Jonathan Rosenberg in 1996; the protocol was standardized as RFC 2543 in 1999. In November 2000, SIP was accepted as a 3GPP signaling protocol and a permanent element of the IP Multimedia Subsystem architecture for IP-based streaming multimedia services in cellular networks. In June 2002 the specification was revised in RFC 3261, and various extensions and clarifications have been published since. SIP was designed to provide a signaling and call setup protocol for IP-based communications supporting the call processing functions and features present in the public switched telephone network, with a vision of also supporting new multimedia applications.
It has been extended for video conferencing, streaming media distribution, instant messaging, presence information, file transfer, Internet fax, and online games. SIP is distinguished by its proponents for having roots in the Internet community rather than in the telecommunications industry. SIP has been standardized by the IETF, while other protocols, such as H.323, have traditionally been associated with the International Telecommunication Union. SIP is involved only in the signaling operations of a media communication session and is used to set up and terminate voice or video calls. SIP can be used to establish multiparty sessions, and it allows modification of existing calls; the modification can involve changing addresses or ports, inviting more participants, and adding or deleting media streams. SIP has also found applications in messaging, such as instant messaging and event subscription and notification. SIP works in conjunction with several other protocols that specify the media format and coding and that carry the media once the call is set up.
For call setup, the body of a SIP message contains a Session Description Protocol data unit, which specifies the media format and media communication protocol. Voice and video media streams are then carried between the terminals using the Real-time Transport Protocol or the Secure Real-time Transport Protocol. Every resource of a SIP network, such as user agents, call routers, and voicemail boxes, is identified by a Uniform Resource Identifier (URI). The syntax of the URI follows the general standard syntax also used in Web services and e-mail. The URI scheme used for SIP is sip, and a typical SIP URI has the form sip:username@domainname or sip:username@hostport, where domainname requires DNS SRV records to locate the servers for the SIP domain, while hostport can be an IP address or a fully qualified domain name of the host and port. If secure transmission is required, the scheme sips is used. SIP employs design elements similar to the HTTP request/response transaction model; each transaction consists of a client request that invokes a particular method or function on the server and at least one response.
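The URI forms above can be pulled apart with a short sketch. This is a simplified pattern for illustration only; real SIP URIs (RFC 3261) also allow parameters and headers, which it ignores:

```python
import re

def parse_sip_uri(uri):
    """Split a SIP URI of the form sip:user@host[:port] into its parts.
    The 'sips' scheme indicates that secure transmission is required."""
    m = re.fullmatch(r"(sips?):([^@]+)@([^:;?]+)(?::(\d+))?", uri)
    if not m:
        raise ValueError(f"not a SIP URI: {uri}")
    scheme, user, host, port = m.groups()
    return {"scheme": scheme, "user": user, "host": host,
            "port": int(port) if port else None,
            "secure": scheme == "sips"}

print(parse_sip_uri("sip:alice@example.com"))
print(parse_sip_uri("sips:bob@192.0.2.4:5061"))
```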
SIP reuses most of the header fields, encoding rules, and status codes of HTTP, providing a readable text-based format. SIP can be carried by several transport layer protocols, including the Transmission Control Protocol, the User Datagram Protocol, and the Stream Control Transmission Protocol. SIP clients typically use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is commonly used for non-encrypted signaling traffic, whereas port 5061 is typically used for traffic encrypted with Transport Layer Security. SIP-based telephony networks often implement call processing features of Signaling System 7 (SS7), for which special SIP protocol extensions exist, although the two protocols themselves are very different. SS7 is a centralized protocol, characterized by a complex central network architecture and dumb endpoints. SIP, in contrast, is a client-server protocol of equipotent peers. SIP features are implemented in the communicating endpoints, while the traditional SS7 architecture is in use only between switching centers. The network elements that use the Session Initiation Protocol for communication are called SIP user agents.
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the Internet, such as a web page or email, is in the form of data packets. A packet is forwarded from one router to another through the networks that constitute an internetwork until it reaches its destination node. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of routers are home and small office routers that forward IP packets between the home computers and the Internet. An example is the owner's cable or DSL router, which connects to the Internet through an Internet service provider. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
Though routers are typically dedicated hardware devices, software-based routers also exist. When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol; each router builds up a routing table listing the preferred routes between any two systems on the interconnected networks. A router has two types of network element components organized onto separate planes. Control plane: a router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal preconfigured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table; the control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane. Forwarding plane: the router forwards data packets between incoming and outgoing interface connections.
It forwards them to the correct network type using information that the packet header contains, matched to entries in the FIB supplied by the control plane. A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission, and it can support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' networks. The largest routers may be used in large enterprise networks, while smaller routers provide connectivity for typical home and office networks. All sizes of routers may be found inside enterprises; the most powerful routers are usually found in ISPs and in academic and research facilities.
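The core lookup a forwarding plane performs is longest-prefix match: among all routing-table entries whose prefix contains the destination address, the most specific one wins. The toy table below is illustrative (the next-hop names are made up); a real FIB uses a trie or TCAM, but the matching rule is the same:

```python
import ipaddress

# A toy routing table: (destination prefix, next hop).
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "isp-gateway"),     # default route
    (ipaddress.ip_network("10.0.0.0/8"), "core-router"),
    (ipaddress.ip_network("10.1.2.0/24"), "branch-office"),
]

def next_hop(dst):
    """Return the next hop for a destination using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    # The most specific (longest) matching prefix determines the next hop.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.77"))  # the /24 wins over the /8 and the default
print(next_hop("10.9.9.9"))   # only the /8 and default match
print(next_hop("8.8.8.8"))    # falls through to the default route
```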
Large businesses may need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use. Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own; they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware such as Tomato, OpenWrt, or DD-WRT. Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network, so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks. In enterprises, a core router may provide a collapsed backbone interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations.
Core routers lack some of the features of edge routers. External networks must be considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Routers also commonly perform network address translation, which restricts connections initiated from external connections but is not recognised as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because open source routers allow mistakes to be found and corrected. Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network of a single organisation is called an interior router. A router operated in the Internet backbone is described as an exterior router, while a router that connects a LAN with the Internet or a wide area network is called a border router, or gateway router. Routers intended for ISP and major enterprise connectivity exchange routing information using the Border Gateway Protocol (BGP).
RFC 4098 defines the types of BGP routers according to their functions. An edge router, also called a provider edge router, is placed at the edge of an ISP network; it uses External BGP (EBGP) to exchange routing information with routers in other autonomous systems.
Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol (HTTP). It is used for secure communication over a computer network and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS) or, formerly, its predecessor, Secure Sockets Layer (SSL); the protocol is therefore also often referred to as HTTP over TLS, or HTTP over SSL. The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit; it protects against man-in-the-middle attacks. The bidirectional encryption of communications between a client and server protects against eavesdropping and tampering with the communication. In practice, this provides a reasonable assurance that one is communicating without interference by attackers with the website that one intended to communicate with, as opposed to an impostor. Historically, HTTPS connections were primarily used for payment transactions on the World Wide Web, e-mail, and sensitive transactions in corporate information systems.
Since 2018, HTTPS has been used more by web users than the original non-secure HTTP, primarily to protect page authenticity on all types of websites. The Uniform Resource Identifier scheme https has identical usage syntax to the http scheme. However, HTTPS signals the browser to use an added encryption layer of SSL/TLS to protect the traffic. SSL/TLS is especially suited for HTTP, since it can provide some protection even if only one side of the communication is authenticated; this is the case with HTTP transactions over the Internet, where typically only the server is authenticated. HTTPS creates a secure channel over an insecure network; this ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted. Because HTTPS piggybacks HTTP on top of TLS, the entirety of the underlying HTTP protocol can be encrypted; this includes the request URL, query parameters, and cookies. However, because host addresses and port numbers are part of the underlying TCP/IP protocols, HTTPS cannot protect their disclosure.
In practice this means that even on a correctly configured web server, eavesdroppers can infer the IP address and port number of the web server that one is communicating with, as well as the amount and duration of the communication, though not the content of the communication. Web browsers know how to trust HTTPS websites based on certificate authorities that come pre-installed in their software. Certificate authorities are in this way trusted by web browser creators to provide valid certificates. Therefore, a user should trust an HTTPS connection to a website if and only if all of the following are true: the user trusts that the browser software correctly implements HTTPS with correctly pre-installed certificate authorities; the user trusts the certificate authority to vouch only for legitimate websites; the website provides a valid certificate; the certificate correctly identifies the website; and the user trusts that the protocol's encryption layer is sufficiently secure against eavesdroppers. HTTPS is especially important over insecure networks, as anyone on the same local network can packet-sniff and discover sensitive information not protected by HTTPS.
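The two certificate checks described above, that the certificate must chain to a trusted authority and must identify the website, correspond directly to settings in TLS client libraries. As a sketch, Python's standard `ssl` module enables both by default when a context is created the recommended way:

```python
import ssl

# ssl.create_default_context() loads the system's pre-installed
# certificate authorities and enables both trust checks: the server's
# certificate must chain to a trusted CA, and it must match the
# hostname the client intended to reach.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # a valid certificate is required
assert ctx.check_hostname                    # the certificate must identify the host
print("certificate verification and hostname checking are on by default")
```

Disabling either check reopens the door to the impostor scenario the text warns about, which is why libraries make verification the default rather than an opt-in.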
Additionally, many free-to-use and paid WLAN networks engage in packet injection in order to serve their own ads on webpages; this capability can also be exploited maliciously in many ways, such as by injecting malware onto webpages and stealing users' private information. HTTPS is also very important for connections over the Tor anonymity network, as malicious Tor nodes could otherwise damage or alter the contents passing through them in an insecure fashion and inject malware into the connection; this is one reason why the Electronic Frontier Foundation and the Tor Project started the development of HTTPS Everywhere, which is included in the Tor Browser Bundle. As more information is revealed about global mass surveillance and criminals stealing personal information, the use of HTTPS security on all websites is becoming increasingly important regardless of the type of Internet connection being used. While metadata about individual pages that a user visits may not be sensitive, when combined it can reveal a lot about the user and compromise the user's privacy.
Deploying HTTPS also allows the use of HTTP/2, a newer generation of HTTP designed to reduce page load times and latency. It is recommended to use HTTP Strict Transport Security (HSTS) with HTTPS to protect users from man-in-the-middle attacks such as SSL stripping. HTTPS should not be confused with the little-used Secure HTTP specified in RFC 2660. As of April 2018, 33.2% of the Alexa top 1,000,000 websites use HTTPS as default, 57.1% of the Internet's 137,971 most popular websites have a secure implementation of HTTPS, and 70% of page loads use HTTPS. Most browsers display a warning if they receive an invalid certificate; older browsers, when connecting to a site with an invalid certificate, would present the user with a dialog box asking whether they wanted to continue.
Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line login and remote command execution, but any network service can be secured with SSH. SSH provides a secure channel over an unsecured network in a client–server architecture, connecting an SSH client application with an SSH server. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The standard TCP port for SSH is 22. SSH is typically used to access Unix-like operating systems, but it can also be used on Microsoft Windows; Windows 10 uses OpenSSH as its default SSH client. SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin and rexec protocols. Those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet, although files leaked by Edward Snowden indicate that the National Security Agency can sometimes decrypt SSH, allowing them to read the contents of SSH sessions.
SSH uses public-key cryptography to authenticate the remote computer and to allow it to authenticate the user, if necessary. There are several ways to use SSH. One is to use automatically generated public-private key pairs to simply encrypt a network connection and then use password authentication to log on. Another is to use a manually generated public-private key pair to perform the authentication, allowing users or programs to log in without having to specify a password. In this scenario, anyone can produce a matching pair of different keys; the public key is placed on all computers that must allow access to the owner of the matching private key. While authentication is based on the private key, the key itself is never transferred through the network during authentication. SSH only verifies that the party offering the public key also owns the matching private key. In all versions of SSH it is important to verify unknown public keys, i.e. associate the public keys with identities, before accepting them as valid; accepting an attacker's public key without validation will authorize the attacker as a valid user. On Unix-like systems, the list of authorized public keys is stored in the home directory of the user allowed to log in remotely, in the file ~/.ssh/authorized_keys.
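Each line in authorized_keys is a key type, a base64-encoded public key blob, and an optional comment. Verifying an unknown key is usually done by comparing fingerprints, which OpenSSH reports as a SHA-256 hash of the key blob. The sketch below computes that fingerprint with the standard library; the key blob shown is a made-up example, not a real key:

```python
import base64
import hashlib

def fingerprint(authorized_keys_line):
    """Compute the OpenSSH-style SHA-256 fingerprint of one
    authorized_keys entry: '<key-type> <base64-blob> [comment]'."""
    parts = authorized_keys_line.split()
    key_type, blob = parts[0], base64.b64decode(parts[1])
    # OpenSSH prints base64(SHA-256(blob)) with the '=' padding stripped.
    digest = base64.b64encode(hashlib.sha256(blob).digest()).rstrip(b"=")
    return f"{key_type} SHA256:{digest.decode()}"

# A hypothetical entry; real lines come from ~/.ssh/authorized_keys.
line = ("ssh-ed25519 "
        "AAAAC3NzaC1lZDI1NTE5AAAAIGt0vV6cuJLJRT1eZYsqJmKPYyyyBmg0U0H0e8mVSBYN "
        "alice@example.com")
print(fingerprint(line))
```

Comparing this value against the fingerprint the server operator publishes is how a user associates an unknown public key with an identity before accepting it.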
This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase. The private key can be looked for in standard places, or its full path can be specified as a command line setting. The ssh-keygen utility produces the keys, always in public-private pairs. SSH also supports password-based authentication, encrypted by automatically generated keys. In this case, an attacker could imitate the legitimate server side, ask for the password, and obtain it. However, this is possible only if the two sides have never authenticated before, as SSH remembers the key that the server side previously used; the SSH client raises a warning before accepting the key of a new, unknown server. Password authentication can be disabled. SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling and forwarding of TCP ports and X11 connections.
SSH uses the client-server model. The standard TCP port 22 has been assigned for contacting SSH servers. An SSH client program is typically used for establishing connections to an SSH daemon accepting remote connections. Both are commonly present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris, and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default. Proprietary and open source versions of various levels of complexity and completeness exist. File managers for UNIX-like systems can use the FISH protocol to provide a split-pane GUI with drag-and-drop; the open source Windows program WinSCP provides similar file management capability using PuTTY as a back-end. Both WinSCP and PuTTY are available packaged to run directly off a USB drive, without requiring installation on the client machine. In Windows 10 version 1709, an official Win32 port of OpenSSH became available; setting up an SSH server in Windows involves enabling a feature in the Settings app.
SSH is also important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall, to a virtual machine. In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology, designed the first version of the protocol, prompted by a password-sniffing attack at his university network. The goal of SSH was to replace the earlier rlogin, TELNET, FTP, and rsh protocols, which did not provide strong authentication nor guarantee confidentiality. Ylönen released his implementation as freeware in July 1995, and the tool quickly gained in popularity.
Copper is a chemical element with symbol Cu and atomic number 29. It is a soft and ductile metal with very high thermal and electrical conductivity. A freshly exposed surface of pure copper has a pinkish-orange color. Copper is used as a conductor of heat and electricity, as a building material, and as a constituent of various metal alloys, such as sterling silver used in jewelry, cupronickel used to make marine hardware and coins, and constantan used in strain gauges and thermocouples for temperature measurement. Copper is one of the few metals that can occur in nature in a directly usable metallic form; this led to early human use in several regions, from c. 8000 BC. Thousands of years later, it was the first metal to be smelted from sulfide ores, c. 5000 BC; the first metal to be cast into a shape in a mold, c. 4000 BC; and the first metal to be purposefully alloyed with another metal, tin, to create bronze, c. 3500 BC. In the Roman era, copper was principally mined on Cyprus, the origin of the name of the metal, from aes cyprium (metal of Cyprus), later corrupted to cuprum, from which the word copper is derived, first used in English around 1530.
The most commonly encountered compounds are copper(II) salts, which impart blue or green colors to such minerals as azurite and turquoise and have historically been used as pigments. Copper used in buildings, usually for roofing, oxidizes to form a green verdigris. Copper is sometimes used in decorative art, both in its elemental metal form and in compounds as pigments. Copper compounds are also used as bacteriostatic agents and wood preservatives. Copper is essential to all living organisms as a trace dietary mineral because it is a key constituent of the respiratory enzyme complex cytochrome c oxidase. In molluscs and crustaceans, copper is a constituent of the blood pigment hemocyanin, replaced by the iron-complexed hemoglobin in fish and other vertebrates. In humans, copper is found mainly in the liver, muscle and bone; the adult body contains between 1.4 and 2.1 mg of copper per kilogram of body weight. Copper, silver and gold are in group 11 of the periodic table; the filled d-shells in these elements contribute little to interatomic interactions, which are dominated by the s-electrons through metallic bonds.
Unlike metals with incomplete d-shells, metallic bonds in copper are lacking a covalent character and are relatively weak. This observation explains the low hardness and high ductility of single crystals of copper. At the macroscopic scale, introduction of extended defects into the crystal lattice, such as grain boundaries, hinders flow of the material under applied stress, thereby increasing its hardness. For this reason, copper is usually supplied in a fine-grained polycrystalline form, which has greater strength than monocrystalline forms. The softness of copper partly explains its high electrical conductivity and high thermal conductivity, the second highest among pure metals at room temperature. This is because the resistivity to electron transport in metals at room temperature originates from scattering of electrons on thermal vibrations of the lattice, which are relatively weak in a soft metal. The maximum permissible current density of copper in open air is approximately 3.1×10⁶ A/m² of cross-sectional area, above which it begins to heat excessively. Copper is one of a few metallic elements with a natural color other than silver.
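The current-density limit above implies a simple sizing rule. As a worked example, for an illustrative bare conductor of 1 mm² (10⁻⁶ m²) cross-section in open air:

```latex
I_{\max} = J_{\max} \cdot A
         = 3.1\times10^{6}\,\mathrm{A/m^2} \times 1\times10^{-6}\,\mathrm{m^2}
         \approx 3.1\,\mathrm{A}
```

The limit scales linearly with cross-sectional area, so thicker conductors carry proportionally more current before excessive heating sets in.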
Pure copper acquires a reddish tarnish when exposed to air. The characteristic color of copper results from the electronic transitions between the filled 3d and half-empty 4s atomic shells – the energy difference between these shells corresponds to orange light. As with other metals, if copper is put in contact with another metal, galvanic corrosion will occur. Copper does not react with water, but it does react slowly with atmospheric oxygen to form a layer of brown-black copper oxide which, unlike the rust that forms on iron in moist air, protects the underlying metal from further corrosion. A green layer of verdigris can often be seen on old copper structures, such as the roofing of many older buildings and the Statue of Liberty. Copper tarnishes when exposed to some sulfur compounds, with which it reacts to form various copper sulfides. There are 29 isotopes of copper. 63Cu and 65Cu are stable, with 63Cu comprising approximately 69% of naturally occurring copper. The other isotopes are radioactive, with the most stable being 67Cu with a half-life of 61.83 hours.
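The half-life quoted above fixes the decay rate completely. As a worked example, the fraction of a 67Cu sample remaining after one day follows from the standard exponential-decay law:

```latex
\frac{N(t)}{N_0} = 2^{-t/t_{1/2}}
                 = 2^{-24\,\mathrm{h}/61.83\,\mathrm{h}}
                 \approx 0.76
```

That is, roughly three quarters of the sample survives the first 24 hours, and half survives to 61.83 hours by definition.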
Seven metastable isotopes have been characterized. Isotopes with a mass number above 64 decay by β−, whereas those with a mass number below 64 decay by β+. 64Cu, which has a half-life of 12.7 hours, decays both ways. 62Cu and 64Cu have significant applications: 62Cu is used in 62Cu-PTSM as a radioactive tracer for positron emission tomography. Copper is produced in massive stars and is present in the Earth's crust in a proportion of about 50 parts per million. In nature, copper occurs in a variety of minerals, including native copper, copper sulfides such as chalcopyrite, digenite and chalcocite, copper sulfosalts such as tetrahedrite-tennantite and enargite, copper carbonates such as azurite and malachite, and copper(I) or copper(II) oxides such as cuprite and tenorite, respectively. The largest mass of elemental copper discovered weighed 420 tonnes and was found in 1857 on the Keweenaw Peninsula in Michigan, US. Native copper is a polycrystal