1.
Dressing (medical)
–
A dressing is a sterile pad or compress applied to a wound to promote healing and protect the wound from further harm. A dressing is designed to be in contact with the wound, as distinguished from a bandage. A dressing can serve a number of purposes, depending on the type, severity and position of the wound, although all are focused on promoting recovery and protecting the wound from further harm. Ultimately, the aim of a dressing is to promote healing by providing a sterile, breathable environment; this in turn reduces the risk of infection, helps the wound heal more quickly, and reduces scarring. Historically, dressings were made of a piece of material, usually a cloth; modern dressings include dry or impregnated gauze, plastic films, gels, foams, hydrocolloids, alginates, hydrogels, and polysaccharide pastes, granules and beads. Pressure dressings are used to treat burns and after skin grafts; they apply pressure and prevent fluids from collecting in the tissue. Dressings can also regulate the chemical environment of a wound, usually with the aim of preventing infection by impregnation with topical antiseptic chemicals. Commonly used antiseptics include povidone-iodine, boracic lint dressings or, historically, castor oil; antibiotics are also often used with dressings to prevent bacterial infection. Medical-grade honey is another option, and there is moderate evidence that honey dressings are more effective than common antiseptics. Bioelectric dressings can be effective in attacking certain antibiotic-resistant bacteria and speeding up the healing process, and dressings are also often impregnated with analgesics to reduce pain. The physical features of a dressing can affect the efficacy of such topical medications: occlusive dressings, made from substances impervious to moisture such as plastic or latex, can be used to increase the rate of absorption into the skin. Dressings are usually secured with adhesive tape and/or a bandage; many dressings today are produced as a pad surrounded by an adhesive backing, ready for immediate application, and these are known as island dressings. Gauze dressings are the most commonly used dressings because of their simplicity and low cost; constructed from an open-weave fabric, usually cotton, traditional gauze dressings function as an absorbent, breathable and protective pad for a wound. Non-stick gauze island dressings are the most common type of dressing today; an example is the Band-Aid. Advances in the understanding of wounds have driven biomedical innovations in the treatment of acute, chronic, and other types of wounds, and many biologics, skin substitutes, biomembranes and scaffolds have been developed to facilitate wound healing through various mechanisms. Applying a dressing is a first aid skill, although many people undertake the practice with no training, especially on minor wounds. Modern dressings almost all come in a prepackaged sterile wrapping; sterility is necessary to prevent infection from pathogens resident within the dressing. Historically, and still the case in less developed areas and in emergencies, dressings are often improvised from whatever is available; this can consist of anything, including clothing or spare material. Applying and changing dressings is one common task in nursing.
2.
Data compression
–
In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. The process of reducing the size of a data file is referred to as data compression. In the context of data transmission, it is called source coding, in opposition to channel coding. Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression process and, usually, in the reversal of the process; data compression is therefore subject to a space–time complexity trade-off.

Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy. The Lempel–Ziv compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. DEFLATE is used in PKZIP, Gzip, and PNG; LZW is used in GIF images. LZ methods use a table-based compression model in which table entries are substituted for repeated strings of data; for most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. Current LZ-based coding schemes that perform well are Brotli and LZX; LZX is used in Microsoft's CAB format. The best modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as a form of statistical modelling. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string; Sequitur and Re-Pair are practical grammar compression algorithms for which software is publicly available. In a further refinement of the use of probabilistic modelling, arithmetic coding is a more modern coding technique that uses the mathematical calculations of a machine to produce a string of encoded bits from a series of input data symbols.
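As a toy illustration of the run-length idea mentioned above, the following Python sketch encodes a sequence of symbols as (count, symbol) pairs and decodes it again. The function names are illustrative, not from any particular library, and this is a minimal sketch rather than a production codec.

```python
from itertools import groupby

def rle_encode(data):
    """Collapse runs of identical symbols into (count, symbol) pairs."""
    return [(len(list(group)), symbol) for symbol, group in groupby(data)]

def rle_decode(pairs):
    """Expand (count, symbol) pairs back into the original sequence."""
    return [symbol for count, symbol in pairs for _ in range(count)]

pixels = ["red"] * 279 + ["blue"] * 3
encoded = rle_encode(pixels)
print(encoded)                        # [(279, 'red'), (3, 'blue')]
assert rle_decode(encoded) == pixels  # lossless: the round trip recovers the data
```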
3.
Unix shell
–
A Unix shell is a command-line interpreter or shell that provides a traditional Unix-like command line user interface. Users direct the operation of the computer by entering commands as text for a command line interpreter to execute. Users typically interact with a Unix shell using a terminal emulator; however, direct operation via serial hardware connections, or a networking session, is common for server systems. All Unix shells provide filename wildcarding, piping, here documents, command substitution, variables and control structures for condition-testing. The most generic sense of the term shell means any program that users employ to type commands. In Unix-like operating systems, users typically have many choices of command-line interpreters for interactive sessions; when a user logs in to the system interactively, a shell program is automatically executed for the duration of the session. The Unix shell is both a command language and a scripting programming language, and is used by the operating system as the facility to control the execution of the system. Shells created for other operating systems often provide similar functionality. On hosts with a windowing system, like macOS, some users may never use the shell directly. However, some vendors have replaced the traditional shell-based startup system with different approaches.

The first Unix shell was the Thompson shell, sh, written by Ken Thompson at Bell Labs and distributed with Versions 1 through 6 of Unix; though not in current use, it is still available as part of some Ancient UNIX systems. It was modeled after the Multics shell, itself modeled after the RUNCOM program Louis Pouzin showed to the Multics team; the rc suffix on some Unix configuration files is a remnant of the RUNCOM ancestry of Unix shells. The PWB shell or Mashey shell, sh, was a version of the Thompson shell, augmented by John Mashey and others and distributed with the Programmer's Workbench UNIX. It focused on making shell programming practical, especially in large shared computing centers, and it added shell variables, user-executable shell scripts, and interrupt-handling. Control structures were extended from if/goto to if/then/else/endif and switch/breaksw/endsw; as shell programming became widespread, these external commands were incorporated into the shell itself for performance. But the most widely distributed and influential of the early Unix shells were the Bourne shell and the C shell; both shells have been used as the coding base and model for many derivative and work-alike shells with extended feature sets. The Bourne shell, sh, was a rewrite by Stephen Bourne at Bell Labs. The language, including the use of a keyword to mark the end of a block, was influenced by ALGOL 68. Traditionally, the Bourne shell program name is sh and its path in the Unix file system hierarchy is /bin/sh, but a number of compatible work-alikes are also available with various improvements and additional features. The sh of FreeBSD and NetBSD is based on ash, which was enhanced to be POSIX conformant for the occasion.
4.
Gzip
–
Gzip is a file format and a software application used for file compression and decompression. The program was created by Jean-loup Gailly and Mark Adler as a free replacement for the compress program used in early Unix systems. Version 0.1 was first publicly released on 31 October 1992. Gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding. DEFLATE was intended as a replacement for LZW and other patent-encumbered data compression algorithms which, at the time, limited the usability of compress. Although its file format also allows for multiple such streams to be concatenated, gzip is normally used to compress just single files. Compressed archives are created by assembling collections of files into a single tar archive, which is then compressed with gzip; the final .tar.gz or .tgz file is called a tarball. Gzip is not to be confused with the ZIP archive format.

Various implementations of the program have been written. The most commonly known is the GNU Project's implementation using Lempel–Ziv coding. OpenBSD's version of gzip is actually the compress program, to which support for the gzip format was added in OpenBSD 3.4; the g in this specific version stands for gratis. These implementations originally come from NetBSD, and support decompression of bzip2 and the Unix pack format. The tar utility included in most Linux distributions can extract .tar.gz files by passing the z option. Zlib is an abstraction of the DEFLATE algorithm in library form which includes support both for the gzip file format and a lightweight stream format in its API. The zlib stream format, DEFLATE, and the gzip file format were standardized respectively as RFC 1950, RFC 1951, and RFC 1952. The gzip format is used in HTTP compression, a technique used to speed up the sending of HTML. It is one of the three formats for HTTP compression as specified in RFC 2616. This RFC also specifies a zlib format, which is equal to the gzip format except that gzip adds eleven bytes of overhead in the form of headers and trailers. Still, the gzip format is sometimes recommended over zlib because Microsoft Internet Explorer does not implement the standard correctly. Zlib DEFLATE is used internally by the Portable Network Graphics format. Since the late 1990s, bzip2, a file compression utility based on a block-sorting algorithm, has gained some popularity as a gzip replacement; it produces considerably smaller files, but at the cost of memory.
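A short Python sketch of the usual usage pattern, using the standard-library gzip module (filenames are illustrative); it compresses a byte string with the DEFLATE-based gzip format and round-trips it through a .gz file.

```python
import gzip

data = b"example payload " * 1000

# Compress a byte string in memory with the gzip format.
compressed = gzip.compress(data)
print(len(data), "->", len(compressed))

# Write and read back a .gz file; gzip handles the header and CRC trailer.
with gzip.open("example.txt.gz", "wb") as f:
    f.write(data)
with gzip.open("example.txt.gz", "rb") as f:
    assert f.read() == data
```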
5.
Bzip2
–
Bzip2 is a free and open-source file compression program that uses the Burrows–Wheeler algorithm. It only compresses single files and is not a file archiver. It is developed and maintained by Julian Seward. Seward made the first public release of bzip2, version 0.15; the compressor's stability and popularity grew over the next several years, and Seward released version 1.0 in late 2000. Bzip2 compresses most files more effectively than the older LZW and Deflate compression algorithms; LZMA is generally more space-efficient than bzip2 at the expense of even slower compression speed, while having much faster decompression. Bzip2 compresses data in blocks of size between 100 and 900 kB and uses the Burrows–Wheeler transform to convert frequently recurring character sequences into strings of identical letters; it then applies a move-to-front transform and Huffman coding. Bzip2's ancestor bzip used arithmetic coding instead of Huffman; the change was made because of a software patent restriction. Bzip2 performance is asymmetric, as decompression is relatively fast. As of May 2010, this functionality has not been incorporated into the main project.

Like gzip, bzip2 is only a data compressor. In the initial run-length-encoding step, the sequence AAAAAAABBBBCCCD is replaced with AAAA\3BBBB\0CCCD, where \3 and \0 represent byte values 3 and 0 respectively. Runs of symbols are always transformed after four consecutive symbols, even if the run-length is zero; in the worst case this step can cause an expansion of 1.25, and in the best case a reduction to less than 0.02. The specification allows for runs of length 256–259 to be encoded. The author of bzip2 has stated that the RLE step was a mistake and was only intended to protect the original BWT implementation from pathological cases. The Burrows–Wheeler transform is the reversible block-sort that is at the core of bzip2; the block is entirely self-contained, with input and output buffers remaining the same size, and in bzip2 the operating limit for this stage is 900 kB. For the block-sort, a matrix is created in which row i contains the whole of the buffer, rotated to start from the i-th symbol; following rotation, the rows of the matrix are sorted into alphabetic order. A 24-bit pointer is stored marking the position for when the block is untransformed. In practice, it is not necessary to construct the full matrix; the sort can instead be performed on pointers into the buffer. The output buffer is the last column of the matrix; this contains the whole buffer. Again, this transform does not alter the size of the processed block. In the move-to-front stage, each of the symbols in use in the document is placed in an array; when a symbol is processed, it is replaced by its location in the array and that symbol is shuffled to the front of the array. The effect is that immediately recurring symbols are replaced by zero symbols; much natural data contains identical symbols that recur within a limited range.
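A compact Python sketch of the two central transforms described above: a naive Burrows–Wheeler transform and move-to-front coding. This is illustrative only; real bzip2 works on blocks of up to 900 kB, sorts pointers rather than building the rotation matrix, and adds run-length and Huffman stages around these steps.

```python
def bwt(data: bytes) -> tuple[bytes, int]:
    """Naive Burrows-Wheeler transform: sort all rotations, keep the last column."""
    rotations = sorted(data[i:] + data[:i] for i in range(len(data)))
    last_column = bytes(rot[-1] for rot in rotations)
    original_index = rotations.index(data)  # pointer stored so the block can be untransformed
    return last_column, original_index

def move_to_front(data: bytes) -> list[int]:
    """Replace each symbol by its position in a running array, then move it to the front."""
    alphabet = list(range(256))
    output = []
    for byte in data:
        index = alphabet.index(byte)
        output.append(index)
        alphabet.pop(index)
        alphabet.insert(0, byte)
    return output

block = b"banana_band_banana"
last_column, pointer = bwt(block)
print(last_column, pointer)
print(move_to_front(last_column))  # recurring symbols become small numbers, mostly zeros
```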
6.
Standard streams
–
In computer programming, standard streams are preconnected input and output communication channels between a computer program and its environment when it begins execution. The three I/O connections are called standard input, standard output and standard error. Originally I/O happened via a physically connected system console, but standard streams abstract this. When a command is executed via a shell, the streams are typically connected to the text terminal on which the shell is running. More generally, a process will inherit the standard streams of its parent process. Users generally know standard streams as input and output channels that handle data coming from an input device; the data may be text with any encoding, or binary data. Streams may be used to chain applications, meaning the output of one program is used as input to another application. In many operating systems this is expressed by listing the application names separated by the vertical bar character, for this reason often called the pipeline character; a well-known example is the use of a pagination application, such as more.

In most operating systems predating Unix, programs had to connect explicitly to the appropriate input and output devices, and OS-specific intricacies caused this to be a tedious programming task. One of Unix's several groundbreaking advances was abstract devices, which removed the need for a program to know or care what kind of device it was communicating with. Older operating systems forced upon the programmer a record structure and frequently non-orthogonal data semantics and device control. Unix eliminated this complexity with the concept of a data stream: a program may write bytes as desired and need not declare how many there will be, or how they will be grouped. Another Unix breakthrough was to automatically associate input and output by default; the program did nothing to establish input and output for a typical input-process-output program. In contrast, previous operating systems usually required some, often complex, job control language to establish connections. Since Unix provided standard streams, the Unix C runtime environment was obliged to support them as well; as a result, most C runtime environments, regardless of the operating system, provide equivalent functionality.

Standard input is stream data going into a program; the program requests data transfers by use of the read operation. Not all programs require stream input: for example, the dir and ls programs may take command-line arguments but do not read stream input. Unless redirected, standard input is expected from the keyboard which started the program. The file descriptor for standard input is 0; the POSIX <unistd.h> definition is STDIN_FILENO, and the corresponding <stdio.h> variable is FILE* stdin. Similarly, standard output is the stream where a program writes its output data; the program requests data transfer with the write operation. Not all programs generate output; for example, the file rename command is silent on success.
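A minimal Python illustration of the three standard streams and their conventional descriptor numbers (0, 1, 2); exact behaviour depends on how the process is started and whether the streams are redirected.

```python
import sys

# The three preconnected streams, with their conventional file descriptor numbers.
print(sys.stdin.fileno(), sys.stdout.fileno(), sys.stderr.fileno())  # usually: 0 1 2

sys.stdout.write("normal output goes to standard output\n")
sys.stderr.write("diagnostics go to standard error\n")

# Reading standard input: processes whatever is piped or typed into the program.
for line in sys.stdin:
    sys.stdout.write(line.upper())
```

Running something like `echo hello | python streams.py` feeds the pipe into standard input, while the two write calls can be redirected independently of each other.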
7.
Tar (computing)
–
In computing, tar is a computer software utility for collecting many files into one archive file, often referred to as a tarball, for distribution or backup purposes. The name is derived from "tape archive", as it was developed to write data to sequential I/O devices with no file system of their own. The archive data sets created by tar contain various file system parameters, such as name, time stamps, ownership, and file access permissions. The command line utility was first introduced in the Seventh Edition of Unix in January 1979, replacing the tp program. The file structure used to store this information was later standardized in POSIX.1-1988 and later POSIX.1-2001.

Many historic tape drives read and write variable-length data blocks, leaving significant wasted space on the tape between blocks, while some tape drives only support fixed-length data blocks. Also, when writing to any medium such as a filesystem or network, it takes less time to write one large block than many small blocks; therefore, the tar command writes data in blocks of many 512-byte records. The user can specify a blocking factor, which is the number of records per block. A tar archive consists of a series of file objects, hence the popular term tarball, referencing how a tarball collects objects of all kinds that stick to its surface. Each file object includes any file data, and is preceded by a 512-byte header record. The file data is written unaltered except that its length is rounded up to a multiple of 512 bytes. The original tar implementation did not care about the contents of the padding bytes and left the buffer data unaltered. The end of an archive is marked by at least two consecutive zero-filled records, and the final block of an archive is padded out to full length with zeros.

The file header record contains metadata about a file. To ensure portability across different architectures with different byte orderings, the information in the header record is encoded in ASCII; thus, if all the files in an archive are ASCII text files and have ASCII names, the archive itself is essentially an ASCII text file. The fields defined by the original Unix tar format are listed in the table below; the link indicator/file type table includes some modern extensions. When a field is unused it is filled with NUL bytes. The header uses 257 bytes, then is padded with NUL bytes to make it fill a 512-byte record; there is no magic number in the header for file identification. Numeric values are encoded in octal numbers using ASCII digits, with leading zeroes; for historical reasons, a final NUL or space character should also be used. Thus, although there are 12 bytes reserved for storing the file size, only 11 octal digits can be stored; this gives a maximum file size of 8 gigabytes on archived files. To overcome this limitation, star in 2001 introduced a base-256 coding that is indicated by setting the high-order bit of the leftmost byte of a numeric field.
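A small Python sketch using the standard-library tarfile module (filenames are illustrative) that creates an archive in the classic ustar layout and then inspects the raw 512-byte header of the first member, decoding the octal, NUL/space-terminated size field described above. The byte offsets (name at 0, size at 124, magic at 257) follow the ustar header layout; treat the output values as examples.

```python
import tarfile

# Create a simple archive containing one file, forcing the plain ustar header format.
with open("hello.txt", "w") as f:
    f.write("hello, tar\n")
with tarfile.open("example.tar", "w", format=tarfile.USTAR_FORMAT) as tar:
    tar.add("hello.txt")

# Inspect the raw first header record: 512 bytes of ASCII/octal encoded fields.
with open("example.tar", "rb") as f:
    header = f.read(512)

name = header[0:100].rstrip(b"\0").decode()
size_field = header[124:136]                       # 12 bytes: octal digits, NUL/space terminated
size = int(size_field.rstrip(b" \0").decode(), 8)  # octal -> integer number of bytes
magic = header[257:263]
print(name, size, magic)                           # e.g. hello.txt 11 b'ustar\x00'
```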
8.
Pipeline (Unix)
–
In Unix-like computer operating systems, a pipeline is a sequence of processes chained together by their standard streams, so that the output of each process feeds directly as input to the next one. The concept of pipelines was championed by Douglas McIlroy at Unix's ancestral home of Bell Labs during the development of Unix, and it is named by analogy to a physical pipeline. The standard shell syntax for pipelines is to list multiple commands; each process takes input from the previous process and produces output for the next process via standard streams. Pipes are unidirectional: data flows through the pipeline from left to right. All widely used Unix shells have a special syntax construct for the creation of pipelines. In all usage one writes the commands in sequence, separated by the ASCII vertical bar character |; the shell starts the processes and arranges for the necessary connections between their standard streams. By default, the standard error streams of the processes in a pipeline are not passed on through the pipe. However, many shells have additional syntax for changing this behavior: in the csh shell, for instance, using |& instead of | signifies that the standard error stream should also be merged with the standard output and fed to the next process, and the Bourne shell can also merge standard error, using 2>&1.

In the most commonly used simple pipelines the shell connects a series of sub-processes via pipes, and executes external commands within each sub-process; thus the shell itself is doing no direct processing of the data flowing through the pipeline. However, it is possible for the shell to perform processing directly, using a so-called mill, or pipemill. There are a couple of ways to avoid this behavior. First, some commands support an option to disable reading from stdin. Alternatively, a pipemill can be written so that the drain does not need to read any input from stdin to do something useful. Pipelines can also be created under program control. The Unix pipe system call asks the operating system to construct a new anonymous pipe object; this results in two new, opened file descriptors in the process: the read-only end of the pipe, and the write-only end. The pipe ends appear to be normal, anonymous file descriptors. To avoid deadlock and exploit parallelism, the Unix process with one or more new pipes will then, generally, call fork to create new processes. Each process will then close the end of the pipe that it will not be using before producing or consuming any data. Alternatively, a process might create a new thread and use the pipe to communicate between them. Named pipes may also be created using mkfifo or mknod and then presented as the input or output file to programs as they are invoked; they allow multi-path pipes to be created, and are effective when combined with standard error redirection. Instead, the output of the program is held in the buffer.
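The sketch below (Python, standard library only, assuming a Unix-like system) mirrors the two mechanisms described above: a shell-style pipeline built with subprocess, and a raw anonymous pipe created with os.pipe() and shared with a forked child. The commands and messages are illustrative.

```python
import os
import subprocess

# 1. Shell-style pipeline: ls | sort -r, wiring stdout of one process to stdin of the next.
ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort", "-r"], stdin=ls.stdout, stdout=subprocess.PIPE)
ls.stdout.close()                      # let ls receive SIGPIPE if sort exits early
print(sort.communicate()[0].decode(), end="")

# 2. Raw anonymous pipe: os.pipe() returns a read-only and a write-only descriptor.
read_fd, write_fd = os.pipe()
if os.fork() == 0:                     # child: writes into the pipe
    os.close(read_fd)                  # close the end it will not use
    os.write(write_fd, b"hello from the child\n")
    os._exit(0)
else:                                  # parent: reads from the pipe
    os.close(write_fd)
    print(os.read(read_fd, 1024).decode(), end="")
    os.wait()
```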
9.
University of Utah
–
The University of Utah is a public coeducational space-grant research university in Salt Lake City, Utah, United States. As the state's flagship university, it offers more than 100 undergraduate majors. The university is classified in the highest ranking, R-1, Doctoral Universities – Highest Research Activity, by the Carnegie Classification of Institutions of Higher Education; the Carnegie Classification also considers the university selective, which is its second most selective admissions category. The university is home to the Quinney College of Law and the School of Medicine, Utah's only medical school. As of Fall 2015, there are 23,909 undergraduate students and 7,764 graduate students, for a total enrollment of 31,673. The university was established in 1850 as the University of Deseret by the General Assembly of the provisional State of Deseret; it received its current name in 1892, four years before Utah attained statehood, and moved to its current location in 1900. The university ranks among the top 50 U.S. universities by total research expenditures, with over $486 million spent in 2014. In addition, the university's Honors College has been reviewed among 50 leading national honors colleges in the U.S., and the university has also been ranked the 12th most ideologically diverse university in the country. The university's athletic teams, the Utes, participate in NCAA Division I athletics as a member of the Pac-12 Conference; its football team has received national attention for winning the 2005 Fiesta Bowl and the 2009 Sugar Bowl.

A Board of Regents was organized by Brigham Young to establish a university in the Salt Lake Valley, and early classes were held in private homes or wherever space could be found. The university closed in 1853 due to lack of funds and lack of feeder schools. It moved out of the council house into the Union Academy building in 1876 and into Union Square in 1884. Additional Fort Douglas land has been granted to the university over the years. Upon his death in 1900, Dr. John R. Park bequeathed his fortune to the university. One third of the faculty resigned in protest of these dismissals; the controversy was largely resolved when Kingsbury resigned in 1916, but university operations were again interrupted by World War I, and later the Great Depression and World War II. Student enrollment dropped to a low of 3,418 during the last year of World War II. Ray Olpin made substantial additions to campus following the war, and enrollment reached 12,000 by the time he retired in 1964. Growth continued in the following decades as the university developed into a research center for fields such as computer science. During the 2002 Winter Olympics, the university hosted the Olympic Village. The University of Utah Asia Campus opened as an international branch campus in the Incheon Global Campus in Songdo, Incheon, South Korea in 2014; three other European and American universities are also participating, and the Asia Campus was funded by the South Korean government. The campus takes up 1,534 acres, including the Health Sciences complex and Research Park, and is located on the east bench of the Salt Lake Valley, close to the Wasatch Range and approximately 2 miles east of downtown Salt Lake City.
10.
GIF
–
GIF (Graphics Interchange Format) is a bitmap image format. The format supports up to 8 bits per pixel for each image, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It also supports animations and allows a separate palette of up to 256 colors for each frame. GIF images are compressed using the Lempel–Ziv–Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality. This compression technique was patented in 1985; controversy over the licensing agreement between the software patent holder, Unisys, and CompuServe in 1994 spurred the development of the Portable Network Graphics standard. By 2004 all the relevant patents had expired. CompuServe introduced the GIF format in 1987 to provide a color image format for their file downloading areas, replacing their earlier run-length encoding format, which was black and white only. The original version of the GIF format was called 87a. In 1989, CompuServe released an enhanced version, called 89a, which added support for animation delays, transparent background colors, and storage of application-specific metadata. The 89a specification also supports incorporating text labels as text, but as there is little control over display fonts, this feature is not widely used. The two versions can be distinguished by looking at the first six bytes of the file, which, interpreted as ASCII, read GIF87a or GIF89a respectively. CompuServe encouraged the adoption of GIF by providing downloadable conversion utilities for many computers. By December 1987, for example, an Apple IIGS user could view pictures created on an Atari ST or Commodore 64. GIF was one of the first two image formats commonly used on Web sites, the other being the black-and-white XBM. In September 1995 Netscape Navigator 2.0 added animated GIF support. The feature of storing multiple images in one file, accompanied by control data, is used extensively on the Web to produce simple animations.

As a noun, the word GIF is found in many dictionaries. In 2012, the American wing of the Oxford University Press recognized GIF as a verb as well, meaning to create a GIF file, as in "GIFing was the perfect medium for sharing scenes from the Summer Olympics". The press's lexicographers voted it their word of the year, saying that GIFs have evolved into a tool with serious applications including research. In May 2015 Facebook added GIF support, even though it originally did not support GIFs on its site. The creators of the format pronounced the word as "jif" with a soft G /ˈdʒɪf/ as in gin; the word is now also widely pronounced with a hard G /ˈɡɪf/ as in gift. The American Heritage Dictionary cites both, indicating "jif" as the primary pronunciation, while the Cambridge Dictionary of American English offers only the hard-G pronunciation. Merriam-Webster's Collegiate Dictionary and the OED cite both pronunciations, but place "gif" in the default position; the New Oxford American Dictionary gave only "jif" in its 2nd edition but updated it to "jif, gif" in its 3rd edition. The disagreement over the pronunciation led to heated Internet debate; the White House and the TV program Jeopardy! also waded into the debate during 2013. GIFs are suitable for sharp-edged line art with a limited number of colors. This takes advantage of the format's lossless compression, which favors flat areas of color with well-defined edges.
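A minimal Python sketch of the LZW idea GIF relies on: a dictionary of strings is built on the fly and repeated strings are replaced by dictionary codes. Real GIF LZW differs in detail (variable-width codes, clear and end-of-information codes, bit packing into the image data blocks), so treat this purely as an illustration of the technique.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Replace repeated byte strings with codes from a dynamically built table."""
    table = {bytes([i]): i for i in range(256)}   # start with all single-byte strings
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate                    # keep extending the match
        else:
            output.append(table[current])          # emit code for the longest known string
            table[candidate] = next_code           # add the new string to the table
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(table[current])
    return output

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), codes)   # fewer codes than input bytes once repeated strings appear
```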
11.
Unisys
–
Unisys Corporation is an American global information technology company based in Blue Bell, Pennsylvania, that provides a portfolio of IT services, software, and technology. It is the proprietor of the Burroughs and UNIVAC lines of computers. Unisys traces its roots back to the founding of the American Arithmometer Company in 1886. Unisys predecessor companies also include the Eckert–Mauchly Computer Corporation, which developed the world's first commercial digital computers, the BINAC and the UNIVAC. In September 1986 Unisys was formed through the merger of the mainframe corporations Sperry and Burroughs. The name was chosen from over 31,000 submissions in an internal competition when Chuck Ayoub submitted the word Unisys, which was composed of parts of the words united, information and systems. The merger was the largest in the industry at the time. At the time of the merger, Unisys had approximately 120,000 employees. Michael Blumenthal became CEO and Chairman after the merger and resigned in 1990 after several years of losses. James Unruh became the new CEO and Chairman after Blumenthal's departure and continued in that role until 1997; by 1997, layoffs had reduced the worldwide employee count to approximately 30,000. In addition to hardware, both Burroughs and Sperry had a history of working on U.S. government contracts, and Unisys continues to provide hardware, software, and services to various government agencies. In 1988 the company acquired Convergent Technologies, makers of CTOS. Joseph McGrath served as CEO and President from January 2005 until September 2008; he was never named chairman. On October 7, 2008, J. Edward Coleman replaced McGrath as CEO and was named Chairman of the board as well. On November 10, 2008, the company was removed from the S&P 500 index as the market capitalization of the company had fallen below the S&P 500 minimum of $4 billion. On October 6, 2014, Unisys announced that Coleman would leave the company effective December 1, 2014; the Unisys share price immediately fell when this news became public. On January 1, 2015, Unisys officially named Peter Altabef as its new president and CEO, and Paul Weaver, who was formerly Lead Independent Director, was named Chairman.

Unisys promotes itself as an information technology company that solves complex IT challenges for some of the world's largest companies and government agencies, including the CIA, FBI, INS, ICE, and the U.S. military. The company maintains a portfolio of over 1,500 U.S. and non-U.S. patents. The company's mainframe line, ClearPath, is capable of running more than traditional mainframe software; the ClearPath system is available in either a UNISYS 2200-based system or an MCP-based system. In 2014, Unisys phased out its CMOS processors, completing the migration of its ClearPath mainframes to Intel x86 chips. The company announced its new ClearPath Dorado 8380 and 8390 systems in May 2015, its most powerful Dorado systems ever. In 2014, CRN ranked Unisys Stealth on its list of Top 10 products for combatting advanced persistent threats, and listed a Unisys platform among the Top 10 coolest servers of 2014.
12.
Usenet
–
Usenet is a worldwide distributed discussion system available on computers. It was developed from the general-purpose UUCP dial-up network architecture; Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users read and post messages to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to the Internet forums that are widely used today. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially. The name comes from the term "users network". One notable difference between a BBS or web forum and Usenet is the absence of a central server and dedicated administrator: Usenet is distributed among a large, constantly changing conglomeration of servers that store and forward messages to one another. Individual users may read messages from and post messages to a local server operated by a commercial Usenet provider, their Internet service provider, university, employer, or their own server. Usenet has significant cultural importance, having given rise to, or popularized, many widely recognized concepts and terms such as FAQ and flame. The name Usenet emphasized its creators' hope that the USENIX organization would take a role in its operation.

The articles that users post to Usenet are organized into topical categories called newsgroups; for instance, sci.math and sci.physics are within the sci.* hierarchy, for science, and talk.origins and talk.atheism are in the talk.* hierarchy. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read. In most newsgroups, the majority of the articles are responses to some other article; the set of articles that can be traced to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads. When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers and exchanges articles with them; in this fashion, the article is copied from server to server. The later peer-to-peer networks operate on a similar principle, but for Usenet it is normally the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions when networks were much slower and not always available; many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out. This is largely because the POTS network was used for transfers. The format and transmission of Usenet articles is similar to that of Internet e-mail messages. Today, Usenet has diminished in importance with respect to Internet forums, blogs and mailing lists, though the groups in alt.binaries are still used for data transfer.
13.
Linux
–
Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux to describe the operating system, which has led to some controversy. Linux was originally developed for personal computers based on the Intel x86 architecture. Because of the dominance of Android on smartphones, Linux has the largest installed base of all operating systems. Linux is also the leading operating system on servers and other big iron systems such as mainframe computers, and it is used by around 2.3% of desktop computers. The Chromebook, which runs on Chrome OS, dominates the US K–12 education market and represents nearly 20% of the sub-$300 notebook sales in the US. Linux also runs on embedded systems, devices whose operating system is built into the firmware and is highly tailored to the system; this includes TiVo and similar DVR devices, network routers, facility automation controls, and televisions, and many smartphones and tablet computers run Android and other Linux derivatives. The development of Linux is one of the most prominent examples of free and open-source software collaboration: the underlying source code may be used, modified and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License. Typically, Linux is packaged in a form known as a Linux distribution for both desktop and server use. Distributions intended to run on servers may omit all graphical environments from the standard install, and because Linux is freely redistributable, anyone may create a distribution for any intended use.

The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and others. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. Later, in a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T licensed the operating system's source code to anyone who asked; as a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product. The GNU Project, started in 1983 by Richard Stallman, has the goal of creating a complete Unix-compatible software system composed entirely of free software. Later, in 1985, Stallman started the Free Software Foundation. By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he probably would not have written his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux; Torvalds has also stated that if 386BSD had been available at the time, he probably would not have created Linux. Although the complete source code of MINIX was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000.
14.
Unix
–
Among these is Apple's macOS, which is the Unix version with the largest installed base as of 2014. Many Unix-like operating systems have arisen over the years, of which Linux is the most popular. Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmer users. The system grew larger as it started spreading in academic circles and as users added their own tools to the system. Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration; these concepts are collectively known as the Unix philosophy. By the early 1980s users began seeing Unix as a universal operating system. Under Unix, the system consists of many utilities along with the master control program, the kernel. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space; the microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a standard computer consisted of a disk for storage and a data terminal for input and output, the Unix file model worked quite well; however, modern systems include networking and other new devices. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores. In microkernel implementations, functions such as network protocols could be moved out of the kernel.

Multics introduced many innovations, but had many problems. Frustrated by the size and complexity of Multics, but not by its aims, the last researchers to leave Multics, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna, decided to redo the work on a much smaller scale. The name Unics, a pun on Multics, was suggested for the project in 1970. Peter H. Salus credits Peter Neumann with the pun, while Brian Kernighan claims the coining for himself. In 1972, Unix was rewritten in the C programming language. Bell Labs produced several versions of Unix that are referred to as Research Unix. In 1975, the first source license for UNIX was sold to faculty at the University of Illinois Department of Computer Science; UIUC graduate student Greg Chesson was instrumental in negotiating the terms of this license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to adoption of Unix by commercial startups, including Sequent, HP-UX, Solaris, and AIX. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4. In the 1990s, Unix-like systems grew in popularity as Linux and BSD distributions were developed through collaboration by a worldwide network of programmers.
15.
Berkeley Software Distribution
–
Berkeley Software Distribution (BSD) is a Unix operating system derivative developed and distributed by the Computer Systems Research Group of the University of California, Berkeley, from 1977 to 1995. Today the term BSD is often used non-specifically to refer to any of the BSD descendants, which together form a branch of the family of Unix-like operating systems, and operating systems derived from the original BSD code remain actively developed and widely used. Historically, BSD has been considered a branch of Unix, Berkeley Unix, because it shared the initial codebase. In the 1980s, BSD was widely adopted by vendors of workstation-class systems in the form of proprietary Unix variants such as DEC ULTRIX and Sun Microsystems SunOS; this can be attributed to the ease with which it could be licensed. Modern descendants include FreeBSD, OpenBSD, NetBSD, Darwin, and PC-BSD.

The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the system, allowing researchers at universities to modify and extend Unix. A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. Also in 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor; he helped to install Version 6 Unix and started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution; 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out, and some 75 copies of 2BSD were later sent out by Bill Joy. 2.9BSD from 1983 included code from 4.1cBSD; the most recent release, 2.11BSD, was first issued in 1992, and as of 2008, maintenance updates from volunteers are still continuing. A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. 3BSD was also alternatively called Virtual VAX/UNIX or VMUNIX, and BSD kernel images were normally called /vmunix until 4.4BSD. 4BSD offered a number of enhancements over 3BSD, notably job control in the previously released csh, delivermail, reliable signals, and the Curses programming library. According to a 1985 review of BSD releases by John Quarterman et al., many installations inside the Bell System ran 4.1BSD. 4.1BSD was a response to criticisms of BSD's performance relative to the dominant VAX operating system, VMS; the 4.1BSD kernel was systematically tuned up by Bill Joy until it could perform as well as VMS on several benchmarks. Back at Bell Labs, 4.1cBSD became the basis of the 8th Edition of Research Unix. To guide the design of 4.2BSD, a steering committee was formed; the committee met from April 1981 to June 1983. Apart from the Fast File System, several features from outside contributors were accepted, including disk quotas and job control. Sun Microsystems provided testing on its Motorola 68000 machines prior to release. The official 4.2BSD release came in August 1983. On a lighter note, it marked the debut of BSD's daemon mascot in a drawing by John Lasseter that appeared on the cover of the printed manuals distributed by USENIX.
16.
Image compression
–
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the properties of image data to provide superior results compared with generic compression methods. Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are suitable for natural images such as photographs in applications where minor loss of fidelity is acceptable to achieve a substantial reduction in bit rate, and lossy compression that produces negligible differences may be called visually lossless. One lossy method reduces the color space to the most common colors in the image: the selected colors are specified in the color palette in the header of the compressed image, and each pixel then just references the index of a color in the color palette. Transform coding is the most commonly used method; in particular, a Fourier-related transform such as the Discrete Cosine Transform (DCT) is widely used (N. Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine Transform", IEEE Trans.). The DCT is sometimes referred to as DCT-II in the context of a family of discrete cosine transforms. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.

Other names for scalability are progressive coding or embedded bitstreams. Despite its contrary nature, scalability also may be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them or for providing variable-quality access to, e.g., databases. There are several types of scalability: quality (or layer) progressive; resolution progressive, where a lower image resolution is encoded first and then the difference to higher resolutions; and component progressive, where grey is encoded first, then color. With region-of-interest coding, certain parts of the image are encoded with higher quality than others; this may be combined with scalability. Compressed data may contain information about the image which may be used to categorize, search, or browse images; such information may include color and texture statistics and small preview images. Compression algorithms require different amounts of processing power to encode and decode, and some high-compression algorithms require high processing power. The quality of a compression method is often measured by the peak signal-to-noise ratio.
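To make the palette idea concrete, here is a small pure-Python sketch that builds a palette from the most common colors in an image (represented here simply as a list of RGB tuples) and stores each pixel as an index into that palette. Real indexed-color formats such as GIF or 8-bit PNG add dithering, headers, and entropy coding on top; the function and variable names are illustrative.

```python
from collections import Counter

def palettize(pixels, palette_size=256):
    """Keep the most common colors as the palette; map every pixel to the nearest entry."""
    palette = [color for color, _ in Counter(pixels).most_common(palette_size)]

    def nearest(color):
        return min(range(len(palette)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(color, palette[i])))

    indices = [nearest(p) for p in pixels]        # one small palette index per pixel
    return palette, indices

image = [(255, 0, 0)] * 500 + [(254, 1, 0)] * 10 + [(0, 0, 255)] * 200
palette, indices = palettize(image, palette_size=2)
print(palette)        # the two dominant colors kept in the palette
print(set(indices))   # every pixel now references a palette index (lossy for rare colors)
```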
17.
Free software
–
The right to study and modify software entails availability of the software's source code to its users. This right is conditional on the person actually having a copy of the software. Richard Stallman used the existing term free software when he launched the GNU Project, a collaborative effort to create a freedom-respecting operating system, and the Free Software Foundation. The FSF's Free Software Definition states that users of free software are free because they do not need to ask for permission to use the software. Free software thus differs from proprietary software, such as Microsoft Office, Google Docs, Sheets, and Slides or iWork from Apple, which users cannot study or change, and from freeware, which is a category of freedom-restricting proprietary software that does not require payment for use. For computer programs that are covered by copyright law, software freedom is achieved with a software license; software that is not covered by copyright law, such as software in the public domain, is free if the source code is in the public domain. Proprietary software, including freeware, uses restrictive software licences or EULAs. Users are thus prevented from changing the software, and this results in the user relying on the publisher to provide updates, help, and support. This situation is called vendor lock-in. Users often may not reverse engineer, modify, or redistribute proprietary software. Other legal and technical aspects, such as patents and digital rights management, may restrict users in exercising their rights. Free software may be developed collaboratively by volunteer computer programmers or by corporations, as part of a commercial activity.

From the 1950s up until the early 1970s, it was normal for computer users to have the software freedoms associated with free software, which was typically public domain software. Software was commonly shared by individuals who used computers and by manufacturers who welcomed the fact that people were making software that made their hardware useful. Organizations of users and suppliers, for example SHARE, were formed to facilitate the exchange of software. As software was often written in an interpreted language such as BASIC, it was distributed as source code; software was also shared and distributed as printed source code in computer magazines and books. In United States vs. IBM, filed January 17, 1969, the government charged that bundled software was anti-competitive. While some software might always be free, there would henceforth be an amount of software produced primarily for sale. In the 1970s and early 1980s, the industry began using technical measures to prevent computer users from being able to study or adapt the software as they saw fit. In 1980, copyright law was extended to computer programs. Software development for the GNU operating system began in January 1984, and the Free Software Foundation was founded in October 1985.
18.
7-Zip
–
7-Zip is an open-source file archiver, an application used primarily to compress files. 7-Zip uses its own 7z archive format, but can read and write several other archive formats. The program can be used from a command-line interface as the command p7zip, as a graphical user interface, or with a window-based shell integration. 7-Zip began in 1999 and is developed by Igor Pavlov. Most of the source code is under the GNU LGPL license; the unRAR code is under the GNU LGPL with an unRAR restriction. By default, 7-Zip creates 7z-format archives with a .7z file extension. Each archive can contain multiple directories and files. As a container format, security or size reduction are achieved using a stacked combination of filters; these can consist of pre-processors, compression algorithms, and encryption filters. The core 7z compression uses a variety of algorithms, the most common of which are bzip2, PPMd, LZMA2, and LZMA. Developed by Pavlov, LZMA is a new system, making its debut as part of the 7z format; LZMA uses an LZ-based sliding dictionary of up to 4 GB in size. The native 7z file format is open and modular. All filenames are stored as Unicode. The 7z file format specification is distributed with the program's source code, in the doc subdirectory.

7-Zip supports a number of compression and non-compression archive formats, including ZIP, Gzip, bzip2, xz, and tar. 7-Zip supports the ZIPX format for unpacking only; it has had this support since at least version 9.20, which was released in late 2010. 7-Zip can open some MSI files, allowing access to the meta-files within along with the main contents, and some Microsoft CAB and NSIS installer formats can be opened. Similarly, some Microsoft executable programs that are self-extracting archives or otherwise contain archived content may be opened as archives. When compressing ZIP or gzip files, 7-Zip uses its own DEFLATE encoder, which may achieve higher compression; the 7-Zip deflate encoder implementation is available separately as part of the AdvanceCOMP suite of tools. The decompression engine for RAR archives was developed using the source code of the unRAR program; 7-Zip v15.06 and later support the RAR5 file format. It can also unpack a few types of backup created by the Android stock recovery image. 7-Zip comes with a file manager along with the standard archiver tools. The file manager, by default, displays hidden files because it does not follow Windows Explorer's policies. The tabs show name, modification time, original and compressed sizes, and attributes. When going up one directory from the root, all drives, removable or internal, appear.
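A hedged sketch of driving 7-Zip from a script via its command-line interface, here through Python's subprocess module. It assumes a 7z executable is on the PATH (on Linux this typically comes with the p7zip package); the a (add) and x (extract with full paths) commands are standard 7-Zip verbs, but the archive and directory names are illustrative.

```python
import subprocess

# Create a .7z archive from a directory (7-Zip command: a = add).
subprocess.run(["7z", "a", "backup.7z", "my_folder/"], check=True)

# Extract it into a target directory (x = extract with full paths).
# The -o switch takes the output directory with no space after it.
subprocess.run(["7z", "x", "backup.7z", "-orestored/"], check=True)
```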
19.
Archive Manager
–
Archive Manager is the archive manager of the GNOME desktop environment. Archive Manager can create and modify archives, view the content of an archive, and view a file contained in an archive. Backend programs are needed to use Archive Manager as a frontend. This can, however, be set using gconf-editor.
20.
Ark (software)
–
Ark is a file archiver and compressor developed by KDE and included in the KDE Applications software bundle. It supports various common archive and compression formats including zip, 7z, rar, and lha. Ark uses libarchive and karchive to support tar-based archives, and is also a frontend for several command-line archivers. Ark can be integrated into Konqueror through KParts technology; after installing it, files can be added to or extracted from archives using Konqueror's context menus. Other features include support for editing files in an archive with external programs, deleting files from an archive, and archive creation with drag and drop from a file manager.
21.
Haiku Applications
–
Haiku is a free and open-source operating system compatible with the now discontinued BeOS. Haiku comes with a set of small yet essential applications. Core applications are a part of Haiku and serve many different purposes; Haiku also has some third-party applications to supplement the core applications. The applications are usually executed by double clicking on their icon.

ActivityMonitor is a system monitor, making it possible for users to analyse performance; they can view information such as the memory being used and cached memory. ActivityMonitor presents the information in graphical form, which makes it easy for users to understand their computer's performance. Haiku CharacterMap is a character map utility that shows the UTF-8 code of every character a font supports; the font size can be changed, and the user can drag any character into the StyledEdit application or copy and paste the character where needed. CodyCam is a photo and video program that makes it possible for the user to take pictures at a specified interval from a connected webcam or any other video-in device. Users can specify the name, the image and video format, and the capture interval. DeskCalc is a simple calculator that can perform different tasks related to mathematics; it has a simple interface, and on the front screen it has some functions or buttons which can perform addition, subtraction, multiplication and division, but it is not restricted to these only. DeskCalc can also perform tasks related to trigonometry, and the type of the calculator can be changed into scientific, radical or any other type to use it for different purposes. DiskProbe is a hex editor to view and alter the data of a file, or on a device, on a byte level; at the start, a file has to be chosen by the user to edit it further in the application. The main view shows a block of data, and its size can be changed. DiskUsage is a disk space analyser application that provides graphical information to the user showing the disk space being consumed. The Scan button has to be clicked once the user executes the application in order to start the scanning process; the application provides a pie chart showing the volume of space being used. Users can understand the graph and can utilise that data in different ways. Every segment of the circle represents a folder; when the cursor is taken over one of them, the user can choose to re-scan the folder, open it with another application, or fetch further information about the memory consumed.
22.
KGB Archiver
–
KGB Archiver is a file archiver and data compression utility based on the PAQ6 compression algorithm. Written in Microsoft Visual C++ by Tomasz Pawlak, KGB Archiver is designed to achieve a high compression ratio; as a consequence, the program is relatively memory- and CPU-intensive. KGB Archiver is free and open-source software released under the terms of the GNU General Public License. Version 2 beta 2 is available for Microsoft Windows, and a version of KGB Archiver 1.0 is available for Unix-like operating systems. KGB Archiver is one of the few applications that uses the PAQ algorithm for making its KGB files, and it has ten levels of compression, from very weak to maximal; at higher levels, however, the time required to compress a file increases significantly. The official website is now offline. The minimum requirements for running KGB Archiver are a 1.5 GHz processor and 256 MB of RAM. It supports native .kgb files and .rar files.
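KGB Archiver's compression levels are internal to the program, but the trade-off the entry describes, where stronger settings cost disproportionately more time, can be illustrated with a minimal sketch using Python's bundled lzma codec; the presets here are an analogy for the idea of graded levels, not KGB Archiver's actual settings.

    import lzma, time

    # Sample input: a Python source file repeated, so there is redundancy to remove.
    data = open(lzma.__file__, "rb").read() * 50

    for preset in (0, 3, 6, 9):   # weak .. maximal, analogous to graded levels
        start = time.perf_counter()
        packed = lzma.compress(data, preset=preset)
        elapsed = time.perf_counter() - start
        print(f"preset {preset}: {len(data)} -> {len(packed)} bytes in {elapsed:.3f} s")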
23.
PAQ
–
PAQ is a series of lossless data compression archivers that have gone through collaborative development to top rankings on several benchmarks measuring compression ratio. Specialized versions of PAQ have won the Hutter Prize and the Calgary Challenge. PAQ is free software distributed under the GNU General Public License. PAQ uses a context mixing algorithm; unlike PPM, a context doesn't need to be contiguous. All PAQ versions predict and compress one bit at a time; once the next-bit probability is determined, it is encoded by arithmetic coding. There are three methods for combining predictions, depending on the version. In PAQ1 through PAQ3, each prediction is represented as a pair of bit counts, and these counts are combined by weighted summation, with greater weights given to longer contexts. In PAQ4 through PAQ6, the predictions are combined as before, but the weights assigned to each model are adjusted to favor the more accurate models. In PAQ7 and later, each model outputs a probability rather than a pair of counts, and the probabilities are combined using a neural network. PAQ1SSE and later versions postprocess the prediction using secondary symbol estimation (SSE): the combined prediction and a small context are used to look up a new prediction in a table, and after the bit is encoded, the table entry is adjusted to reduce the prediction error. SSE stages can be pipelined with different contexts or computed in parallel with the outputs averaged. For the coding step, the compressed output represents a big-endian base-256 number x in [0, 1) lying in the probability interval that the model assigns to the input string s; it is always possible to find an x whose length is at most one byte longer than the Shannon limit of −log2 P(s) bits. The length of s is stored in the archive header. The arithmetic coder in PAQ is implemented by maintaining, for each prediction, a lower and upper bound on x, initially spanning the whole interval. After each prediction, the current range is split into two parts in proportion to P(0) and P(1), the probabilities that the next bit of s will be a 0 or a 1 respectively; the next bit is then encoded by selecting the corresponding subrange to be the new range. The number x is decompressed back to the string s by making an identical series of bit predictions: the range is split as during compression, the portion containing x becomes the new range, and the corresponding bit is appended to s. In PAQ, the lower and upper bounds of the range are represented in three parts: the most significant base-256 digits are identical, so they can be written as the leading bytes of x; the next 4 bytes are kept in memory, such that the leading byte of each differs; and the trailing bits are assumed to be all zeros for the lower bound. Compression is terminated by writing one more byte from the lower bound.
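A minimal sketch of the bit-at-a-time coder described above, assuming a caller-supplied predict() model; the variable names, the 32-bit bounds and the simple four-byte flush are illustrative, and PAQ's real implementation differs in detail.

    def arith_encode(bits, predict):
        """Encode a sequence of bits.  `predict(history)` returns the model's
        probability that the next bit is 1.  Keeps lower/upper bounds on the
        code number x and emits leading bytes once the bounds agree on them."""
        lo, hi = 0x00000000, 0xFFFFFFFF
        out, history = bytearray(), []
        for bit in bits:
            p1 = predict(history)
            # Split the current range in proportion to P(0) and P(1);
            # the lower sub-range codes a 0 bit, the upper sub-range a 1 bit.
            mid = lo + int((hi - lo) * (1.0 - p1))
            mid = max(lo, min(mid, hi - 1))       # keep both sub-ranges non-empty
            if bit:
                lo = mid + 1
            else:
                hi = mid
            # When the most significant byte of lo and hi match, it is a settled
            # digit of x: write it out and shift both bounds left by one byte.
            while (lo >> 24) == (hi >> 24):
                out.append(lo >> 24)
                lo = (lo << 8) & 0xFFFFFFFF
                hi = ((hi << 8) & 0xFFFFFFFF) | 0xFF
            history.append(bit)
        out.extend(lo.to_bytes(4, "big"))         # flush: any x in [lo, hi] will do
        return bytes(out)

    # Example with a static model predicting 10% ones: a mostly-zero input
    # compresses to far fewer bytes than its raw bit count would suggest.
    coded = arith_encode([0] * 100 + [1] * 5, lambda history: 0.1)
    print(len(coded), "bytes for 105 bits")

The decoder would mirror this loop exactly, splitting the range with the same predictions and appending whichever bit's subrange contains x, as the entry describes.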
24.
PeaZip
–
PeaZip is a free and open-source file manager and file archiver for Microsoft Windows, Linux and BSD made by Giorgio Tani. It supports its native PEA archive format and other mainstream formats. PeaZip is mainly written in Free Pascal, using Lazarus, and is released under the terms of the GNU Lesser General Public License. PeaZip allows users to run extraction and archiving operations automatically by exporting the job defined in the GUI front-end as a generated command-line script. It can also create, edit and restore archive layouts, speeding up the definition of recurring archiving or backup operations; in addition, the program's user interface can be customized. PeaZip is available for IA-32 and x86-64 as a natively standalone, portable application and as an installable package for Microsoft Windows and Linux; it is also available as a PortableApps package. Along with more popular and general-purpose archive formats like 7z and tar, PeaZip supports the PAQ and LPAQ formats. PeaZip supports encryption with the AES 256-bit cipher in the 7z and ZIP archive formats; in PeaZip's native PEA format and in FreeArc's ARC format, the supported ciphers are AES 256-bit, Blowfish, Twofish 256 and Serpent 256. The graphical frontend's progress bar is less reliable than the native console progress indicators of the various backend utilities. PeaZip does not support editing files inside archives, nor does it support adding files to subfolders in an already created archive; adding files to an archive always adds them to its root directory. PEA, an acronym for Pack Encrypt Authenticate, is an archive file format. It is a general-purpose archiving format featuring compression and multiple-volume output, and it was developed in conjunction with the PeaZip file archiver. PeaZip and Universal Extractor support the PEA archive format. Prior to release 5.3, the installer could include an ad-supported bundle; from release 5.3 on, PeaZip no longer has an ad-supported bundle, and the PeaZip Portable and PeaZip for Linux packages never featured one.
25.
Xarchiver
–
Xarchiver is a front-end to various command-line archiving tools for Linux and BSD operating systems, designed to be independent of the desktop environment. It is the default archiving application of Xfce and LXDE. It uses the GTK+ 2 toolkit to provide the program interface and is therefore capable of running on any system where GTK+ 2 support exists; a large number of applications also use this toolkit, so support is widespread among Linux distributions irrespective of their specific desktop solution. Supported formats, provided the corresponding program is installed, are 7z, ARJ, bzip2, gzip, LHA, lzma, lzop, RAR, RPM, DEB and tar. Xarchiver uses the XDS Direct Save Protocol for drag-and-drop file saving. The program acts as a front-end for various commonly installed libraries dealing with the supported compression formats, so Xarchiver cannot create archives whose archiver is not installed. Currently, the Xfce master branch of Xarchiver is being continued at GitHub.
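A front-end of this kind essentially maps an archive's type to an installed command-line tool and delegates the work to it. The following is a minimal sketch of that dispatch under assumed tool names and flags; the extension-to-command table is illustrative and is not Xarchiver's actual code.

    import shutil, subprocess, sys

    # Illustrative table: archive suffix -> (tool to check for, extraction command).
    EXTRACTORS = {
        ".zip":     ("unzip", ["unzip", "-o"]),
        ".7z":      ("7z",    ["7z", "x"]),
        ".tar.gz":  ("tar",   ["tar", "-xzf"]),
        ".tar.bz2": ("tar",   ["tar", "-xjf"]),
        ".rar":     ("unrar", ["unrar", "x"]),
    }

    def extract(archive):
        for suffix, (tool, cmd) in EXTRACTORS.items():
            if archive.endswith(suffix):
                if shutil.which(tool) is None:
                    sys.exit(f"{tool} is not installed; cannot handle {archive}")
                subprocess.run(cmd + [archive], check=True)   # delegate to the backend
                return
        sys.exit(f"no known backend for {archive}")

    if len(sys.argv) > 1:
        extract(sys.argv[1])

This also shows why such a front-end cannot create or open archives for which no backend is installed: it has nothing to delegate to.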
26.
Zipeg
–
Zipeg is free and open-source software that extracts files from compressed archives such as ZIP, RAR and 7z, as well as some rarely used types. Zipeg works under Mac OS X and Windows and is best known for its file preview ability. It is incapable of compressing files, although it is able to extract compressed ones. Zipeg is built on top of the 7-Zip backend; its UI is implemented in Java and is open source. Zipeg automatically detects filenames in national alphabets and correctly translates them to Unicode. Zipeg reads Exif thumbnails from JPEG digital photographs and uses them for a tool-tip-style preview.
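The filename problem Zipeg addresses arises because ZIP entries without the UTF-8 flag store names in whatever code page the archiving system used. A minimal sketch of the general idea in Python (Zipeg itself is a Java application and does not use this code): re-interpret the raw name bytes with a guessed national code page, here cp866 for Cyrillic, which is only an example.

    import zipfile

    def recover_names(path, guess="cp866"):
        """List a ZIP's entry names, re-decoding non-UTF-8 names with a guessed code page."""
        with zipfile.ZipFile(path) as zf:
            for info in zf.infolist():
                name = info.filename
                if not (info.flag_bits & 0x800):        # UTF-8 flag not set on this entry
                    raw = name.encode("cp437")          # undo zipfile's cp437 fallback decoding
                    try:
                        name = raw.decode(guess)        # try the guessed national code page
                    except UnicodeDecodeError:
                        pass                            # keep the cp437 reading
                print(name)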
27.
Filzip
–
Filzip is a freeware file archiver program for the Microsoft Windows platform written by Philipp Engel. While free, the author does request donations to help cover the cost of development and reward him for his work. As of April 2015, no new version has been released since version 3.0.6, and the software is presumed to be unmaintained. The program has been localized into more than twenty languages. Filzip supports seven different archive formats, allowing the user to add and extract files from the archives; these include ZIP, BH, CAB, JAR, LHA and TAR. A handful of other formats are supported for extraction only, including ACE, ARC, ARJ, RAR and ZOO. Files within most formats can be viewed without explicitly unpacking them. ZIP files may be spanned, that is, written to any number of files with a fixed maximum size so that they can be placed on removable media. The program offers shell integration and can create self-extracting executable archives for redistribution without licensing fees. CNET rated it 4/5 stars and wrote, "Easy program operation sets this freeware file compression tool apart from the crowded genre."
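Spanning, as described here, means writing the output in fixed-size pieces so that each piece fits on the removable medium. The sketch below only illustrates that idea by splitting an already-built archive into numbered parts; the part-naming scheme is made up for the example, and the real ZIP spanning specification additionally adjusts the archive's internal records, which Filzip's own implementation would handle.

    def span(archive_path, part_size):
        """Split a file into numbered fixed-size parts and return the part names."""
        parts = []
        with open(archive_path, "rb") as src:
            index = 1
            while True:
                chunk = src.read(part_size)
                if not chunk:
                    break
                part_name = f"{archive_path}.{index:03d}"   # e.g. backup.zip.001
                with open(part_name, "wb") as dst:
                    dst.write(chunk)
                parts.append(part_name)
                index += 1
        return parts

    # Example: pieces sized for a 1.44 MB floppy.
    # span("backup.zip", 1_440_000)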
28.
StuffIt Expander
–
StuffIt Expander is a proprietary, freeware, closed-source decompression software utility developed by Allume Systems. It runs on the classic Mac OS, macOS and Microsoft Windows. As of November 2010, the latest Macintosh version is 2011, which requires Mac OS X v10.5 or later; version 2010 runs on Mac OS X v10.4, version 10.0.2 runs on Mac OS X v10.3, and the latest version for Mac OS 9 is 7.0.3. As of September 2011, the Linux version is no longer available for download from the official page. StuffIt has been a target of criticism and dissatisfaction from Mac users in the past because the file format changed frequently, notably during the introduction of StuffIt version 5.0: Expander 5.0 contained many bugs, and its format was not readable by the earlier version 4.5. The latest stand-alone version for Windows is 2009; in contrast to version 12.0, which was only able to decompress the newer .sitx archives, version 2009 claims to be able to decompress over 30 formats. The executables require both the .NET v2.0 framework and the MSVC 2008 runtimes. The previous stand-alone version able to decompress .sit and other classic Mac OS-specific archives was 7.02, distributed with StuffIt v7.0.x for Windows. From versions 7.5.x to 11, the Expander capabilities were actually performed by the StuffIt Standard Edition; to start StuffIt in Expander mode, the command-line switches -expand and -uiexpander were used. Note that the registration reminder dialogue box is not shown in this case. There is also a command-line DOS application called UNSTUFF v1.1 that allows decompression of .sit files.
29.
TUGZip
–
TUGZip is a freeware file archiver for Microsoft Windows. It handles a variety of archive formats, including some of the commonly used ones such as zip, rar, gzip, bzip2 and sqx. It can also view disk image files such as BIN, C2D, IMG and ISO. TUGZip can repair corrupted ZIP archives and can encrypt files with six different algorithms. Since the release of TUGZip 3.5.0.0, development has been suspended due to a lack of time on the part of its developer, Kindahl.
30.
ZipGenius
–
ZipGenius is a freeware file archiver developed by Matteo Riso of M.Dev Software for Microsoft Windows. It is capable of handling nearly two dozen file formats, including all the most common ones, and can password-protect archives. It is offered in two editions, standard and suite; the suite edition includes optional modules of the ZipGenius project. ZipGenius was first released as Mr. Zip 98 to a restricted group of users and was renamed ZipGenius in March 1998; its first public release was made in April 1999. ZipGenius's latest release is version 6.3.2.3112, containing a large number of fixes, the majority of which are bug fixes not visible at first glance. ZipGenius features include support for TWAIN devices, file extraction to a CD/DVD burner, and an FTP client; ZGAlbum allows users to create slideshows with pictures, which can be loaded from a folder or imported through a TWAIN device. ZipGenius also integrates fully with the Windows Explorer shell. User interface elements and functions have been placed in multiple locations: program functions can be accessed either from a menu or from buttons on the main toolbar, and they can be customized to suit user preferences. ZipGenius 6 is capable of opening a variety of formats commonly used with Linux, such as RPM, TAR, TAR.GZ, TGZ and GZ. As regards data security, the archive history file is encrypted, the most-recently-used files list can be enabled or disabled, users can create self-locking/self-erasing archives, and they can choose to encrypt files with one of four algorithms. Also, CZIP files no longer depend on ZipGenius, because users can create executable CZIP files which allow encrypted archives to be sent to people who do not use ZipGenius. ZipGenius is reported to be still under development, and it is claimed that Matteo Riso, the main developer, is focusing on the next release, codenamed Miky2007, and that the new version of the software will still be branded as ZipGenius. ZipGenius is hosted at zipgenius.it, but in 2008 the WinInizio Software team put zipgenius.com online to offer a website for users not speaking Italian; WinInizio.it also hosts a support section in both Italian and English. On July 1, 2009, the ZipGenius.it website was totally redesigned upon the MODx web application, and on July 6, 2009, ZipGenius 6.1.0.1010 was released to the public. ZipGenius is able to handle the open-source UPX executable compressor: if UPX.exe is in the ZipGenius program folder, users get an extra compression level, Brutal+UPX, while creating new ZIP archives, which helps because many executable files are distributed just as they come out of code compilation. ZipGenius supports the 7-Zip compression format, conceived by Igor Pavlov, which can compress files more than the standard ZIP or RAR compression formats. OpenOffice.org documents, which are saved as compressed archives, are handled as well: ZipGenius allows users to manage those files as if they were common ZIP files and, in addition, to optimize them. OpenOffice.org does not use the maximum compression level available for the ZIP format, but ZipGenius does; when users optimize an OpenOffice.org document with ZipGenius, the gain is from 5 to 10 KB of size reduction, because ZipGenius recompresses the document with the Brutal compression level.
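The OpenOffice.org optimisation works because such a document is itself a ZIP container, so rewriting its entries at a stronger deflate setting shrinks the file. A minimal sketch of that idea in Python follows; the file names are illustrative, this is not ZipGenius's code, and it ignores the ODF convention of keeping the mimetype entry stored uncompressed as the first entry.

    import shutil, zipfile

    def recompress(document, level=9):
        """Rewrite every entry of a ZIP-based document (e.g. an .odt file)
        with the strongest deflate setting, then replace the original file."""
        optimized = document + ".tmp"
        with zipfile.ZipFile(document) as src, \
             zipfile.ZipFile(optimized, "w",
                             compression=zipfile.ZIP_DEFLATED,
                             compresslevel=level) as dst:
            for info in src.infolist():
                dst.writestr(info.filename, src.read(info.filename))
        shutil.move(optimized, document)

    # recompress("report.odt")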