This category contains the following 3 subcategories, out of 3 total.
Articles in the category "Computerrandapparatuur"
This category contains the following 28 pages, out of 28 total.
1. Randapparaat – A peripheral is an ancillary device used to put information into and get information out of a computer. A peripheral is any auxiliary device, such as a computer mouse or keyboard, that connects to and works with the computer. Common input peripherals include keyboards, computer mice, graphics tablets, touchscreens, barcode readers, image scanners, microphones, webcams, game controllers, light pens, and digital cameras. Common output peripherals include computer displays, printers, projectors, and loudspeakers; tape drives are an example of a storage peripheral. Touchscreens are an example of hardware that combines different devices into a single component usable as both an input and an output device.
2. Wikimedia Commons – Wikimedia Commons is an online repository of free-use images, sound, and other media files. It is a project of the Wikimedia Foundation. The repository contains over 38 million media files, and in July 2013 the number of edits on Commons reached 100,000,000. The project was proposed by Erik Möller in March 2004 and launched on September 7, 2004. The expression "educational" is understood in its broad sense of providing knowledge. Wikimedia Commons does not allow fair use or uploads under non-free licenses; it hosts only freely licensed media and deletes copyright violations. The default language for Commons is English, but registered users can customize their interface to use any other available user interface translation. Many content pages, in particular policy pages and portals, have also been translated into various languages. Files on Wikimedia Commons are categorized using MediaWiki's category system; in addition, they are often collected on individual topical gallery pages. The project was originally proposed to also host free text files. In 2012, BuzzFeed described Wikimedia Commons as "littered with dicks", and in 2010 Wikipedia co-founder Larry Sanger reported Wikimedia Commons to the FBI for hosting sexualized images of children known as lolicon. Wales responded to the backlash from the Commons community by voluntarily relinquishing some site privileges. Over time, additional functionality has been developed to interface Wikimedia Commons with the other Wikimedia projects. Specialized uploading tools and scripts such as Commonist have been created to simplify the process of uploading large numbers of files. To verify that photos uploaded from Flickr were free content, users could take part in a collaborative external review process, now defunct.
The site has three mechanisms for recognizing quality works. One is known as Featured pictures, where works are nominated and other community members vote to accept or reject the nomination; this process began in November 2004. Another process, known as Quality images, began in June 2006 and has a simpler nomination process comparable to Featured pictures. Quality images only accepts works created by Wikimedia users, whereas Featured pictures additionally accepts nominations of works by third parties such as NASA. These processes select only a small fraction of the total number of files: Commons collects files of all quality levels, from professional works to simple documentary images. Files with specific defects can be tagged for improvement or warning, or even proposed for deletion, but there is no process for systematically rating all files. The site held its inaugural Picture of the Year competition for 2006: all images that were made a Featured picture during 2006 were eligible, and they were voted on by eligible Wikimedia users during two rounds of voting.
3. 3D-printer – Objects can be of almost any shape or geometry and are produced using digital model data from a 3D model or another electronic data source such as an Additive Manufacturing File (AMF). The term 3D printing originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads layer by layer; more recently, the term is used in the popular vernacular to encompass a wider variety of additive manufacturing techniques. United States and global technical standards use the term additive manufacturing for this broader sense. Early additive manufacturing equipment and materials were developed in the 1980s. On July 16, 1984, Alain Le Méhauté, Olivier de Witte, and Jean-Claude André filed their patent for the stereolithography process, three weeks before Chuck Hull filed his own patent for stereolithography. The French inventors' application was abandoned by the French General Electric Company (CGE) and CILAS, reportedly for lack of business perspective. Hull defined the process as a system for generating three-dimensional objects by creating a cross-sectional pattern of the object to be formed, but this had already been invented by Kodama. Hull's contributions are the design of the STL file format, widely accepted by 3D printing software, and the digital slicing process. The technology used by most 3D printers to date (especially hobbyist and consumer-oriented models) is fused deposition modeling. AM processes for metal sintering or melting usually went by their own individual names in the 1980s and 1990s, but AM-type sintering was beginning to challenge that separation. By the mid-1990s, new techniques for material deposition were developed at Stanford and Carnegie Mellon University, including microcasting and sprayed materials. Sacrificial and support materials had also become more common, enabling new object geometries.
The umbrella term additive manufacturing gained wider currency in the 2000s. As the various additive processes matured, it became clear that metal removal would soon no longer be the only metalworking process done under automated control. It was during this decade that the term subtractive manufacturing appeared as a retronym for the family of machining processes with metal removal as their common theme. The term subtractive has not replaced the term machining, instead complementing it when a term covering any removal method is needed. Additive terms, by contrast, reflect the fact that these technologies all share the common theme of sequential-layer material addition and joining throughout a 3D work envelope under automated control. The 2010s were the first decade in which metal end-use parts, such as engine brackets, were produced additively in significant numbers. Agile tooling uses a cost-effective, high-quality method to respond quickly to customer and market needs; it can be used in hydro-forming, stamping, and injection molding. As the technology matured, several authors began to speculate that 3D printing could aid sustainable development in the developing world. 3D-printable models may be created with a computer-aided design (CAD) package, via a 3D scanner, or by a digital camera. Models created with CAD result in fewer errors, which can be corrected before printing, allowing verification of the object's design before it is printed. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.
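The digital slicing step credited to Hull above can be illustrated with a small sketch: a slicer intersects each triangle of a mesh with horizontal planes at successive layer heights, producing the outlines the printer deposits layer by layer. The functions below are a hypothetical, minimal toy, not any real slicer's API.

```python
# Minimal sketch of the "digital slicing" idea behind 3D printing:
# intersect mesh triangles with horizontal planes at successive layer heights.

def slice_triangle(tri, z):
    """Return the 2D segment where plane Z=z cuts the triangle, or None."""
    pts = []
    for (x1, y1, z1), (x2, y2, z2) in zip(tri, tri[1:] + tri[:1]):
        if (z1 - z) * (z2 - z) < 0:  # edge strictly crosses the plane
            t = (z - z1) / (z2 - z1)
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height):
    zmax = max(z for tri in triangles for (_, _, z) in tri)
    layers = []
    z = layer_height / 2  # sample mid-layer to avoid vertices lying on a plane
    while z < zmax:
        segs = [s for tri in triangles for s in [slice_triangle(tri, z)] if s]
        layers.append((z, segs))
        z += layer_height
    return layers

# A single upright triangle spanning z=0 to z=1
mesh = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]]
layers = slice_mesh(mesh, 0.25)
print(len(layers))  # 4 layers
```

A real slicer additionally chains the segments of each layer into closed loops and generates toolpaths, but the plane-intersection core is the same.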
4. Cd-brander – Some optical disc drives can only read from certain discs, but recent drives can both read and record; such drives are also called burners or writers. Compact discs, DVDs, and Blu-ray discs are common types of media that can be read and recorded by these drives. Optical disc drive types that are no longer in production include the CD-ROM drive, the CD writer drive, and the combo drive. As of 2015, the DVD writer drive, supporting all existing recordable and rewritable DVD formats, is the most common type in desktop PCs and laptops; there are also DVD-ROM drives, BD-ROM drives, and Blu-ray Disc combo drives. Optical drives are very commonly used in computers to read software and consumer media distributed on disc, and to record discs for archival and data exchange purposes. Floppy disk drives, with a capacity of 1.44 MB, have been made obsolete; USB flash drives, which are high-capacity, small, and inexpensive, are suitable where read/write capability is required. Large backups are often made on external hard drives, as their price has dropped to a level that makes this viable. The first laser disc, demonstrated in 1972, was the Laservision 12-inch video disc; its video signal was stored in an analog format, like a video cassette. The first digitally recorded optical disc was a 5-inch audio compact disc in a format created by Sony. The CD-ROM format was developed by Sony and Denon and introduced in 1984 as an extension of Compact Disc Digital Audio; the CD-ROM has a storage capacity of 650 MB. Also in 1984, Sony introduced a LaserDisc data storage format, and in 1987 Sony demonstrated an erasable and rewritable 5.25-inch optical drive. The first Blu-ray prototype was unveiled by Sony in October 2000; technically, Blu-ray Disc required a thinner cover layer for the narrower beam and shorter wavelength of the blue laser. The first BD-ROM players were shipped in mid-June 2006, and the first Blu-ray Disc titles were released by Sony and MGM on June 20, 2006. The first mass-market Blu-ray Disc rewritable drive for the PC was the BWU-100A. Initially, CD-type lasers with a wavelength of 780 nm were used.
For DVDs, the wavelength was reduced to 650 nm. Two main servomechanisms are used: the first maintains the proper distance between lens and disc, ensuring the laser beam is focused as a small spot on the disc; the second moves the head along the disc's radius, keeping the beam on the track. Optical disc media are read beginning at the inner radius and moving toward the outer edge. Reflected light is detected by photodiodes that create corresponding electrical signals. An optical disc recorder encodes data onto a recordable CD-R, DVD-R, DVD+R, or BD-R disc by selectively heating parts of an organic dye layer with a laser. This changes the reflectivity of the dye, creating marks that can be read like the pits of a pressed disc; for recordable discs, the process is permanent and the media can be written to only once.
5. Computerterminal – A computer terminal is an electronic or electromechanical hardware device that is used for entering data into, and displaying data from, a computer or a computing system. A terminal whose function is confined to display and input of data, and which depends on the host computer for its processing power, is called a dumb terminal or thin client. A personal computer can run terminal emulator software that replicates the function of a terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system. The terminal of the first working programmable, fully automatic, digital Turing-complete computer, the Z3, had a keyboard. Early user terminals connected to computers were electromechanical teleprinters/teletypewriters, such as the Teletype Model 33 ASR (originally used for telegraphy) or the Friden Flexowriter. Later, printing terminals such as the DECwriter LA30 were developed; however, printing terminals were limited by the speed at which paper could be printed, and for interactive use the paper record was unnecessary. The problem with video terminals was that the amount of memory needed to store the information on a page of text was comparable to the memory in low-end minicomputers then in use. Displaying the information at video speeds was also a challenge, and the control logic took up a rack's worth of pre-integrated-circuit electronics. Another approach involved the use of the storage tube, a specialized CRT developed by Tektronix that retained information written on it without the need to refresh. The Datapoint 3300 from Computer Terminal Corporation, announced in 1967 and shipped in 1969, solved the memory space issue by using a digital shift-register design and by using only 72 columns rather than the later more common choice of 80. It provided a blinking cursor that could be positioned on screen. The term "intelligent" in this context dates from 1969.
Notable examples include the IBM 2250 and IBM 2260, predecessors of the IBM 3270. Providing even more processing possibilities, workstations like the TeleVideo TS-800 could run CP/M-86, blurring the distinction between terminal and personal computer. Most terminals were connected to minicomputers or mainframe computers and often had a green or amber screen. Typically, terminals communicate with the computer via a serial port, often over a null modem cable, using an EIA RS-232, RS-422, RS-423, or current loop serial interface. In fact, the design for the Intel 8008 was originally conceived at Computer Terminal Corporation as the processor for the Datapoint 2200. While early IBM PCs had single-color green screens, these screens were not terminals: the screen of a PC did not contain any character generation hardware, and all signals and video formatting were generated by the video display card in the PC, or by the CPU. An IBM PC monitor, whether the monochrome display or the 16-color display, was technically much more similar to an analog TV set than to a terminal. With suitable software a PC could, however, emulate a terminal; the Data General One could even be booted into terminal emulator mode from its ROM. Since the advent and subsequent popularization of the personal computer, few genuine hardware terminals are used to interface with computers today.
6. CUDA – CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing, an approach termed GPGPU. The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements. The CUDA platform is designed to work with programming languages such as C and C++, and it also supports programming frameworks such as OpenACC and OpenCL. When it was first introduced by Nvidia, the name CUDA was an acronym for Compute Unified Device Architecture, but Nvidia subsequently dropped the use of the acronym. The graphics processing unit, as a specialized computer processor, addresses the demands of real-time, high-resolution 3D graphics and other compute-intensive tasks. By 2012, GPUs had evolved into highly parallel multi-core systems allowing very efficient manipulation of large blocks of data. C/C++ programmers use CUDA C/C++, compiled with nvcc, Nvidia's LLVM-based C/C++ compiler. Fortran programmers can use CUDA Fortran, compiled with the PGI CUDA Fortran compiler from The Portland Group. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Haskell, R, MATLAB, and IDL, and there is native support in Mathematica. In the computer industry, GPUs are used for graphics rendering; CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography, and other fields. CUDA provides both a low-level API and a higher-level API. The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux; Mac OS X support was added in version 2.0. CUDA works with all Nvidia GPUs from the G8x series onwards, including GeForce and Quadro, and CUDA is compatible with most standard operating systems. Nvidia states that programs developed for the G8x series will also work without modification on all future Nvidia video cards.
CUDA exposes a fast shared memory region that can be shared among threads; this can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups. CUDA source code is now processed according to C++ syntax rules; this was not always the case, as earlier versions of CUDA were based on C syntax rules. Interoperability with rendering languages such as OpenGL is one-way, with OpenGL having access to registered CUDA memory. Unlike OpenCL, CUDA-enabled GPUs are only available from Nvidia. No emulator or fallback functionality is available for modern revisions, and valid C++ may sometimes be flagged and prevent compilation due to the way the compiler approaches optimization for target GPU device limitations.
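The heart of the CUDA programming model is that a kernel runs once per thread, and each thread derives the data element it handles from its block and thread indices. As a hedged, CPU-only sketch (plain Python simulating the index arithmetic, not real CUDA; the `launch` helper is a hypothetical stand-in for a kernel launch), the standard global-index formula `blockIdx.x * blockDim.x + threadIdx.x` looks like this:

```python
# CPU-only simulation of CUDA's thread-indexing scheme for a vector add.
# In CUDA C the kernel body runs once per GPU thread; here we loop over
# (block_idx, thread_idx) pairs to mimic that behavior sequentially.

def vec_add_kernel(a, b, out, n, block_idx, block_dim, thread_idx):
    i = block_idx * block_dim + thread_idx  # global thread index
    if i < n:                               # bounds guard, as in real kernels
        out[i] = a[i] + b[i]

def launch(kernel, grid_dim, block_dim, *args):
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(*args, block_idx, block_dim, thread_idx)

n = 10
a = list(range(n))
b = [x * 2 for x in range(n)]
out = [0] * n
# 3 blocks of 4 threads cover 10 elements (2 threads fail the bounds guard)
launch(vec_add_kernel, 3, 4, a, b, out, n)
print(out)  # [0, 3, 6, 9, 12, 15, 18, 21, 24, 27]
```

The bounds guard matters because the grid size (blocks × threads) rarely divides the data size exactly; real CUDA kernels use the same `if (i < n)` pattern.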
7. Dvd-brander – A DVD recorder is an optical disc recorder that uses optical disc recording technologies to digitally record analog or digital signals onto blank writable DVD media. Such devices are available as either installable drives for computers or as standalone components for use in studios or home theater systems. As of March 1, 2007, all new tuner-equipped television devices manufactured or imported in the United States must include an ATSC tuner; NTSC DVD recorders are therefore undergoing a transformation, either adding a digital ATSC tuner or removing over-the-air broadcast television tuner capability entirely. However, these DVD recorders can still record audio and analog video. Originally, DVD recorders supported one of three standards: DVD-RAM, DVD-RW, and DVD+RW, none of which are directly compatible. As a general rule, however, most current DVD drives support both the + and - standards, while few support the DVD-RAM standard, which is not directly compatible with standard DVD drives. Recording speed is generally denoted in multiples of X, where 1X in DVD usage is equal to 1.321 MB/s. In practice, higher speeds are largely confined to computer-based DVD recorders, since standalone units generally record in real time, that is, at 1X speed. Recorders use a laser to read and write DVDs; the reading laser is usually not stronger than 5 mW, while the writing laser is considerably more powerful, and the higher a drive's speed rating, the stronger its laser. DVD burner lasers often peak at about 100-400 mW in continuous wave operation, and some laser hobbyists have discovered ways to extract the laser diode from DVD burners and modify it to create laser apparatus capable of burning. DVD recorder drives can be used in conjunction with DVD authoring software to create DVDs near or equal to commercial quality, and they are also widely used for data backup and exchange.
As a general rule, computer-based DVD recorders can also handle CD-R and CD-RW media. Most internal drives are designed with parallel ATA interfaces, with SATA becoming more readily available; external drives almost always use USB 2.0 or IEEE 1394. DVD duplication systems are generally built out of stacks of these drives, connected through a computer-based backplane. The standalone DVD recorder first appeared on the Japanese consumer market in 1999 at a high price; as of early 2007, however, DVD recorders from notable brands were selling for US$200 or €150 and less, with even lower street prices. Early units supported only DVD-RAM and DVD-R discs, but more recent units can record to all major formats: DVD-R, DVD-RW, DVD+R, DVD+RW, and DVD-R DL. Some models now include mechanical hard-disk-drive-based digital video recorders to improve ease of use, and standalone DVD recorders generally have basic DVD authoring software built in. Later, Panasonic additionally introduced Blu-ray Disc recorders with terrestrial HDTV tuners. DVD recorders have several advantages over VCRs, including superior video and audio quality and easy-to-handle, smaller form-factor disc media. They do have some disadvantages: slow initial access/load times due to the nature of the disc, and limited rewritability on DVD±RW discs.
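Using the 1X = 1.321 MB/s figure above, the time to write a full disc at a given speed rating is simple arithmetic. This is a hedged back-of-the-envelope sketch: real burn times also include lead-in/lead-out overhead and speed ramp-up, and the nominal 4.7 GB capacity is the assumed single-layer figure.

```python
# Rough burn-time estimate from the DVD speed rating (1X = 1.321 MB/s).
DVD_1X_MBPS = 1.321   # MB per second at 1X
DISC_MB = 4700        # nominal single-layer DVD capacity (~4.7 GB)

def burn_time_seconds(speed_rating, size_mb=DISC_MB):
    """Idealized write time for size_mb of data at the given X rating."""
    return size_mb / (DVD_1X_MBPS * speed_rating)

for x in (1, 8, 16):
    # 1X (real-time standalone recording) takes roughly an hour per disc
    print(f"{x:2d}X: ~{burn_time_seconds(x) / 60:.1f} min")
```

This also shows why standalone recorders, which record in real time at 1X, take about an hour per disc while a 16X computer drive finishes in a few minutes.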
8. Geluidskaart – A sound card is an internal expansion card that provides input and output of audio signals to and from a computer under control of computer programs. The term sound card is also applied to external audio interfaces used for professional audio applications. Sound functionality can also be integrated onto the motherboard, using components similar to those found on plug-in cards; such an integrated sound system is still often referred to as a sound card. Most sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital data into an analog format. The output signal is connected to an amplifier, headphones, or an external device using standard interconnects. Multichannel digital sound playback can also be used for music synthesis and even multiple-channel emulation. This approach has become common as manufacturers seek simpler and lower-cost sound cards. Most sound cards have a line-in connector for an input signal from a cassette tape or other sound source that has higher voltage levels than a microphone. The sound card digitizes this signal, and the DMA controller transfers the samples to main memory, from where recording software may write them to the hard disk for storage, editing, or further processing. Another common external connector is the microphone connector, for signals from a microphone or other low-level input device; input through a microphone jack can be used, for example, by speech recognition or voice-over-IP applications. An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. These distinct channels are seen as the number of audio outputs; sometimes, the terms voice and channel are used interchangeably to indicate the degree of polyphony, not the output speaker configuration. For example, many older sound chips could accommodate three voices but only one channel for output, requiring all voices to be mixed together.
Later cards, such as the AdLib sound card, had 9-voice polyphony combined into one mono output channel. For some years, most PC sound cards have had multiple FM synthesis voices, usually used for MIDI music. Modern low-cost integrated sound hardware, such as audio codecs meeting the AC'97 standard, often performs synthesis in software; this is similar to the way inexpensive softmodems perform modem tasks in software rather than in hardware. Also, in the days of wavetable sample-based synthesis, some sound card manufacturers advertised polyphony based solely on MIDI capabilities. In this case, the output channel count is typically irrelevant: the polyphony measurement applies solely to the number of MIDI instruments the sound card is capable of producing at one given time, and the final playback stage is performed by an external DAC with significantly fewer channels than voices. Connectors on the cards are color-coded as per the PC System Design Guide.
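The "many voices, one output channel" arrangement described above amounts to summing the voices' samples into a single stream. A hedged sketch, using synthetic sine-wave voices rather than any real card's mixing path:

```python
import math

# Mix several independent "voices" into one mono output channel,
# as early 3-voice sound chips with a single output had to do.
SAMPLE_RATE = 8000  # samples per second (illustrative choice)

def sine_voice(freq, n_samples, amp=0.3):
    """Generate one voice: a sine tone at the given frequency."""
    return [amp * math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
            for t in range(n_samples)]

def mix(voices):
    # Sum the voices sample-by-sample, clamping to the [-1, 1] output range
    return [max(-1.0, min(1.0, sum(s))) for s in zip(*voices)]

voices = [sine_voice(f, 1000) for f in (262, 330, 392)]  # a C-major triad
mono = mix(voices)
print(len(mono))  # 1000 mixed mono samples
```

Keeping each voice's amplitude low enough that the sum stays within range is the software analogue of the analog headroom a hardware mixer needs.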
9. Head-mounted display – A head-mounted display, abbreviated HMD, is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one or each eye. An HMD has many uses, including in gaming, aviation, and engineering. There is also the optical head-mounted display, a wearable display that can reflect projected images while allowing the user to see through it. A typical HMD has one or two small displays, with lenses and semi-transparent mirrors embedded in eyeglasses, a visor, or a helmet. The display units are miniaturized and may use cathode ray tubes, liquid crystal displays, liquid crystal on silicon, or organic light-emitting diodes; some vendors employ multiple micro-displays to increase resolution and field of view. HMDs differ in whether they can display only computer-generated imagery (CGI), only live imagery from the physical world, or a combination. Most HMDs can display only a computer-generated image, sometimes referred to as a virtual image. Some HMDs allow CGI to be superimposed on a real-world view; this is sometimes referred to as augmented reality or mixed reality. Combining a real-world view with CGI can be done by projecting the CGI through a partially reflective mirror, a method often called optical see-through, or electronically by accepting video from a camera, a method often called video see-through. An optical head-mounted display uses an optical mixer made of partly silvered mirrors: it can reflect artificial images, let real images cross the lens, and let the user look through it. Various methods exist for see-through HMDs, most of which can be grouped into two families, based on curved mirrors or on waveguides. Curved mirrors have been used by Laster Technologies and by Vuzix in their Star 1200 product, while various waveguide methods have existed for years.
These include diffraction optics, holographic optics, and polarized optics. Major HMD applications include military, government, and civilian-commercial uses. In 1962, Hughes Aircraft Company revealed the Electrocular, a compact CRT-based head-mounted display. Ruggedized HMDs are increasingly being integrated into the cockpits of modern helicopters and fighter aircraft; these are usually integrated with the pilot's flying helmet and may include protective visors and night vision devices. Military, police, and firefighters use HMDs to display information such as maps or thermal imaging data while viewing a real scene; recent applications have included the use of HMDs for paratroopers. In 2005, the Liteye HMD was introduced for ground combat troops as a rugged, waterproof, lightweight display that clips into a standard US PVS-14 military helmet mount. The self-contained color monocular organic light-emitting diode display replaces the NVG tube; the LE has see-through ability and can be used as a standard HMD or for augmented reality applications. The design is optimized to provide high-definition data under all lighting conditions, and the LE has low power consumption, operating on four AA batteries for 35 hours or receiving power via a standard Universal Serial Bus connection.
10. Hybride schijf – The purpose of the SSD in a hybrid drive is to act as a cache for the data stored on the HDD, improving overall performance by keeping copies of the most frequently used data on the faster SSD. There are two main hybrid storage technologies that combine NAND flash memory or SSDs with HDD technology: dual-drive hybrid systems and solid-state hybrid drives (SSHDs). Dual-drive hybrid systems combine separate SSD and HDD devices installed in the same computer, with overall performance optimization managed in one of three ways. First, by the computer user, who places more frequently accessed data onto the faster drive. Second, by the computer's operating system, which combines SSD and HDD into a single hybrid volume; examples of hybrid volume implementations in operating systems are bcache and dm-cache on Linux, and Apple's Fusion Drive and other Logical Volume Management based implementations on OS X. Third, by chipsets external to the individual storage drives, an example being flash cache modules (FCMs), which combine the use of separate SSD and HDD components while managing performance optimizations via host software and device drivers. What distinguishes a dual-drive system from an SSHD system is that each drive maintains its ability to be addressed independently by the operating system if desired. A solid-state hybrid drive incorporates a significant amount of NAND flash memory into a disk drive, resulting in a single integrated device. The fundamental design principle behind SSHDs is to identify the data elements most directly associated with performance, an approach that has been shown to deliver significantly improved performance over a standard HDD. In both forms of hybrid storage technology, the goal is to combine the HDD with a faster technology to provide a balance of improved performance and high capacity.
In general, this is achieved by placing "hot" data, the data most directly associated with improved performance, on the NAND flash memory; making decisions about which data elements are prioritized for NAND flash memory is at the core of SSHD technology. Products offered by various vendors may achieve this through device firmware, through device drivers, or through software modules, and this self-contained mode results in a storage product that appears and operates to a host system exactly as a traditional hard drive would. Some features of SSHD drives, such as the host-hinted mode, require operating system support; Microsoft added support for host-hinted operation in Windows 8.1, while patches for the Linux kernel have been available since October 2014. Early SSHD models, including Seagate's Momentus PSD, were 2.5-inch drives featuring 128 MB or 256 MB NAND flash memory options; the Momentus PSD emphasized power efficiency for a better mobile experience and relied on Windows Vista's ReadyDrive. These products were not widely adopted. In May 2010, Seagate introduced a new hybrid product called the Momentus XT and used the term solid-state hybrid drive; it focused on delivering the benefits of hard-drive capacity points with SSD-like performance, and shipped as a 500 GB HDD with 4 GB of integrated NAND flash memory. In November 2011, Seagate introduced what it referred to as its second-generation SSHD, which increased the capacity to 750 GB and pushed the integrated NAND flash memory to 8 GB.
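The "hot data" caching idea at the core of SSHDs can be sketched as a small, fast cache in front of a slow store. This is an illustrative model only, using simple least-recently-used eviction; real SSHD firmware uses far more sophisticated heuristics, and the `HybridDrive` class and its block addressing are hypothetical.

```python
from collections import OrderedDict

# Toy model of an SSHD: a small NAND cache (LRU) in front of a large HDD.
class HybridDrive:
    def __init__(self, cache_blocks):
        self.cache = OrderedDict()      # block -> data, in LRU order
        self.cache_blocks = cache_blocks
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:          # served from NAND: fast path
            self.hits += 1
            self.cache.move_to_end(block)
            return self.cache[block]
        self.misses += 1                 # served from platters: slow path
        data = f"data@{block}"           # stand-in for an HDD read
        self.cache[block] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict least-recently-used block
        return data

drive = HybridDrive(cache_blocks=2)
for block in [1, 2, 1, 1, 3, 2]:
    drive.read(block)
print(drive.hits, drive.misses)  # 2 4
```

Repeated reads of block 1 hit the cache, while the one-off reads go to the slow store, which is exactly the access pattern that makes keeping "hot" blocks in flash pay off.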