Sega Net Link
Sega Net Link is an attachment for the Sega Saturn game console that provided Saturn users with internet and email access through their console. Released in October 1996, the unit fit into the Sega Saturn cartridge port and consisted of a 28.8 kbit/s modem, a custom chip to interface with the Saturn, and a browser developed by Planetweb, Inc. The unit sold for US$400 bundled with a Sega Saturn. The Net Link connected to the internet through standard dial-up services. Unlike other online gaming services in the US, players did not connect to a central service; instead, the modem in the Saturn's cartridge slot dialed the person with whom they wished to play. Since it required no servers to operate, the service could function as long as at least two users had the necessary hardware and software, as well as a phone line. In Japan, gamers did connect through a centralized service known as SegaNet, which was later taken offline and converted for Dreamcast use.
According to Yutaka Yamamoto, Sega of America's director of new technology, the Saturn's design allowed it to access the internet purely through software: "Sega engineers always felt the Saturn would be good for multimedia applications as well as game playing. So they developed a kernel in the operating system to support communications tasks." While the Net Link was not the first accessory that allowed console gamers in North America to play video games online, it was the first to allow players to use their own Internet Service Provider to connect. While Sega recommended that players use Concentric, the Sega Net Link enabled players to choose any ISP within its technical specifications; the device was capable of connecting at 28.8 kbit/s in America and 14.4 kbit/s in Japan. However, it suffered from memory limitations, which made it impossible to download audio or video clips, save e-mail messages, or cache loaded web pages. In Japan, the Net Link required the use of smartcards with prepaid credits.
The Saturn had a floppy printer cable converter which could be used with the Net Link. A web browser from Planetweb was included, and a mouse and keyboard adapter were available to simplify navigation; Sega also released a dedicated Saturn mouse and Saturn keyboard. In addition, to allow users to browse with just the Saturn joypad, Sega produced a series of CDs containing hundreds of website addresses, and the browser included a magnifying function. The Planetweb browser was written in C and ran in just 570 KB, whereas a typical PC browser of the time used about 6 MB. At the time most television screens ran at a lower resolution than computer monitors, so the browser used anti-aliasing to smooth out the edges of onscreen text characters. Five games were released with Net Link support. Launching at 15,000 yen in Japan and $199 in the US, it was considered inexpensive compared to competing online services, and it was a runner-up for Electronic Gaming Monthly's Best Peripheral of 1996. Despite the media excitement over the device and its prominent appearance in Sega's marketing campaign, less than 1% of Saturn owners purchased the Net Link in 1996.
Over its lifetime, an estimated 50,000 Net Link units were sold in North America, half of Sega's original goal. Another 1,100 units were donated by Sega of America to schools, in partnership with the nonprofit group Projectneat. In 2017, fans were able to make the Net Link work over a modern high-speed connection with VoIP. The Net Link Zone connected to an Internet Relay Chat server, irc.sega.com, which changed to irc0.dreamcast.com on the release of Sega's Dreamcast. These servers were run by Sega employees but were later given over to Net Link chat users Leo Daniels and Mark Leatherman to run. SegaNet was launched in 2000 for the Dreamcast; its European counterpart was called Dreamarena.

The five games released with Net Link support were:
Daytona USA CCE Net Link Edition
Duke Nukem 3D
Saturn Bomberman
Sega Rally
Virtual On
The Linux kernel is a free and open-source, Unix-like operating system kernel. The Linux family of operating systems is based on this kernel and is deployed both on traditional computer systems, such as personal computers and servers in the form of Linux distributions, and on various embedded devices such as routers, wireless access points, PBXes, set-top boxes, FTA receivers, smart TVs, PVRs, and NAS appliances. While adoption of the Linux kernel in desktop operating systems is low, Linux-based operating systems dominate nearly every other segment of computing, from mobile devices to mainframes; as of November 2017, all of the world's 500 most powerful supercomputers run Linux. The Android operating system for tablet computers and smartwatches also uses the Linux kernel. The Linux kernel was conceived and created in 1991 by Linus Torvalds for his personal computer, with no cross-platform intentions, but it has since expanded to support a huge array of computer architectures, many more than other operating systems or kernels.
Linux attracted developers and users who adopted it as the kernel for other free software projects, notably the GNU Operating System, which was created as a free, non-proprietary operating system modeled on UNIX in the aftermath of the Unix wars. The Linux kernel API, the application programming interface through which user programs interact with the kernel, is meant to be stable and not to break userspace programs. As part of the kernel's functionality, device drivers control the hardware; however, the interface between the kernel and loadable kernel modules, unlike in many other kernels and operating systems, is deliberately not meant to be stable. The Linux kernel, developed by contributors worldwide, is a prominent example of free and open-source software, and day-to-day development discussions take place on the Linux kernel mailing list. The Linux kernel is released under the GNU General Public License version 2, with some firmware images released under various non-free licenses. In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, started working on some simple ideas for an operating system.
He started with a task switcher and a terminal driver. On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet: I'm doing a operating system for 386 AT clones. This has been brewing since April, is starting to get ready. I'd like any feedback on things people like/dislike in minix. I've ported bash and gcc, things seem to work; this implies that I'll get something practical within a few months Yes - it's free of any minix code, it has a multi-threaded fs. It is NOT portable, it never will support anything other than AT-harddisks, as that's all I have:-(. It's in C, but most people wouldn't call what I write C, it uses every conceivable feature of the 386 I could find, as it was a project to teach me about the 386. As mentioned, it uses a MMU, for both paging and segmentation. It's the segmentation; some of my "C"-files are as much assembler as C. Unlike minix, I happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them. After that, many people contributed code to the project.
Early on, the MINIX community contributed code and ideas to the Linux kernel. At the time, the GNU Project had created many of the components required for a free operating system, but its own kernel, GNU Hurd, was incomplete and unavailable; the Berkeley Software Distribution had not yet freed itself from legal encumbrances. Despite the limited functionality of the early versions, Linux gained developers and users. In September 1991, Torvalds released version 0.01 of the Linux kernel on the FTP server of the Finnish University and Research Network. It had 10,239 lines of code. On 5 October 1991, version 0.02 of the Linux kernel was released. Torvalds assigned version 0 to the kernel to indicate that it was for testing and not intended for production use. In December 1991, Linux kernel 0.11 was released. This version was the first to be self-hosting, as Linux kernel 0.11 could be compiled by a computer running the same kernel version. When Torvalds released version 0.12 in February 1992, he adopted the GNU General Public License version 2 over his previous self-drafted license, which had not permitted commercial redistribution.
On 19 January 1992, the first post to the new newsgroup alt.os.linux was submitted. On 31 March 1992, the newsgroup was renamed comp.os.linux. The fact that Linux is a monolithic kernel rather than a microkernel was the topic of a debate between Andrew S. Tanenbaum, the creator of MINIX, and Torvalds. This discussion is known as the Tanenbaum–Torvalds debate and started in 1992 on the Usenet discussion group comp.os.minix as a general debate about Linux and kernel architecture. Tanenbaum argued that microkernels were superior to monolithic kernels and that therefore Linux was obsolete. Unlike traditional monolithic kernels, device drivers in Linux are configured as loadable kernel modules and can be loaded or unloaded while the system is running.
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, usage in 2017 was up to 70% for Google's Android; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like ones, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy, are able to operate with a limited number of resources, and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
Wireless network interface controller
A wireless network interface controller is a network interface controller which connects to a wireless radio-based computer network, rather than a wired network, such as Token Ring or Ethernet. A WNIC, just like other NICs, works on Layer 2 of the OSI Model; this card uses an antenna to communicate via microwave radiation. A WNIC in a desktop computer is traditionally connected using the PCI bus. Other connectivity options are USB and PC card. Integrated WNICs are available. Early wireless network interface controllers were implemented on expansion cards that plugged into a computer bus; the low cost and ubiquity of the Wi-Fi standard means that many newer mobile computers have a wireless network interface built into the motherboard. The term is applied to IEEE 802.11 adapters. An 802.11 WNIC can operate in two modes known as infrastructure mode and ad hoc mode: Infrastructure mode In an infrastructure mode network the WNIC needs a wireless access point: all data is transferred using the access point as the central hub.
All wireless nodes in an infrastructure mode network connect to an access point. All nodes connecting to the access point must have the same service set identifier (SSID) as the access point, and if any form of wireless security is enabled on the access point, they must share the same keys or other authentication parameters. Ad hoc mode In an ad hoc mode network the WNIC does not require an access point, but rather can interface with all other wireless nodes directly. All the nodes in an ad hoc network must use the same channel and SSID. The IEEE 802.11 standard sets out low-level specifications for how all 802.11 wireless networks operate. Earlier 802.11 interface controllers are only compatible with earlier variants of the standard, while newer cards support both current and old standards. Specifications used in marketing materials for WNICs include wireless data transfer rates, wireless transmit power, and supported wireless network standards. 802.11g, for example, offers data transfer speeds equivalent to 802.11a – up to 54 Mbit/s – and the wider 300-foot range of 802.11b, and is backward compatible with 802.11b.
Most Bluetooth cards do not implement any form of the 802.11 standard. Wireless range may be affected by objects in the way of the signal and by the quality of the antenna. Large electrical appliances, such as refrigerators, fuse boxes, metal plumbing, air conditioning units can impede a wireless network signal; the theoretical maximum range of IEEE 802.11 is only reached under ideal circumstances and true effective range is about half of the theoretical range. The maximum throughput speed is only achieved at close range; the reason is that wireless devices dynamically negotiate the top speed at which they can communicate without dropping too many data packets. In an 802.11 WNIC, the MAC Sublayer Management Entity can be implemented either in the NIC's hardware or firmware, or in host-based software, executed on the main CPU. A WNIC that implements the MLME function in hardware or firmware is called a FullMAC WNIC or a HardMAC NIC and a NIC that implements it in host software is called a SoftMAC NIC.
A FullMAC device hides the complexity of the 802.11 protocol from the main CPU, instead providing an 802.3 interface. FullMAC chips are used in mobile devices because they are easier to integrate into complete products and because power is saved by having a specialized CPU perform the 802.11 processing. A popular example of a FullMAC chip is the one used in the Raspberry Pi 3. The Linux kernel's mac80211 framework provides capabilities for SoftMAC devices and additional capabilities for devices with limited functionality. FreeBSD also supports SoftMAC drivers.
Berkeley sockets is an application programming interface for Internet sockets and Unix domain sockets, used for inter-process communication. It is implemented as a library of linkable modules and originated with the 4.2BSD Unix operating system, released in 1983. A socket is an abstract representation of the local endpoint of a network communication path. The Berkeley sockets API represents it as a file descriptor, in keeping with the Unix philosophy of providing a common interface for input and output to streams of data. Berkeley sockets evolved with little modification from a de facto standard into a component of the POSIX specification; therefore, the term POSIX sockets is essentially synonymous with Berkeley sockets. They are also known as BSD sockets, acknowledging the first implementation in the Berkeley Software Distribution. Berkeley sockets originated with the 4.2BSD Unix operating system as a programming interface. Only in 1989 could UC Berkeley release versions of its operating system and networking library free from the licensing constraints of AT&T Corporation's proprietary Unix.
All modern operating systems implement a version of the POSIX socket interface, which became the standard interface for connecting to the Internet; even the Winsock implementation for MS Windows, developed by unaffiliated developers, follows the standard. The BSD sockets API is written in the C programming language; most other programming languages provide similar interfaces, written as a wrapper library based on the C API. As the Berkeley socket API evolved and yielded the POSIX socket API, certain functions were deprecated or removed and replaced by others. The POSIX API is also designed to be reentrant. The STREAMS-based Transport Layer Interface (TLI) API offers an alternative to the socket API; however, many systems that provide the TLI API also provide the Berkeley socket API. Non-Unix systems often expose the Berkeley socket API with a translation layer to a native networking API; Plan 9 and Genode instead use file-system APIs with control files rather than file descriptors. The Berkeley socket interface is defined in several header files.
The names and content of these files differ between implementations. In general, the Berkeley socket API provides the following functions:

socket() creates a new socket of a certain type, identified by an integer number, and allocates system resources to it.
bind() is used on the server side; it associates a socket with a socket address structure, i.e. a specified local IP address and a port number.
listen() is used on the server side; it causes a bound TCP socket to enter listening state.
connect() is used on the client side; it assigns a free local port number to a socket. In the case of a TCP socket, it causes an attempt to establish a new TCP connection.
accept() is used on the server side; it accepts a received incoming attempt to create a new TCP connection from the remote client, and creates a new socket associated with the socket address pair of this connection.
send(), recv(), and recvfrom() are used for sending and receiving data; the standard functions write() and read() may also be used.
close() causes the system to release resources allocated to a socket.
In the case of TCP, the connection is terminated.

gethostbyname() and gethostbyaddr() are used to resolve host names and addresses (IPv4 only).
select() is used to suspend execution, waiting for one or more of a provided list of sockets to be ready to read, ready to write, or to report an error.
poll() is used to check on the state of a socket in a set of sockets; the set can be tested to see if any socket can be read from or written to, or if an error occurred.
getsockopt() is used to retrieve the current value of a particular socket option for the specified socket.
setsockopt() is used to set a particular socket option for the specified socket.

The function socket() creates an endpoint for communication and returns a file descriptor for the socket. It takes three arguments:

domain, specifying the protocol family of the created socket. For example: AF_INET for the IPv4 network protocol, AF_INET6 for IPv6, AF_UNIX for local sockets.
type, one of: SOCK_STREAM, SOCK_DGRAM, SOCK_SEQPACKET, SOCK_RAW.
protocol, specifying the actual transport protocol to use; the most common are IPPROTO_TCP, IPPROTO_SCTP, IPPROTO_UDP, and IPPROTO_DCCP. These protocols are specified in the file netinet/in.h.
The value 0 may be used to select a default protocol for the given domain and type. The function returns -1 if an error occurred; otherwise, it returns an integer representing the newly assigned descriptor.

bind() associates a socket with an address. When a socket is created with socket(), it is only given a protocol family, not assigned an address, so this association must be performed explicitly. The function has three arguments:

sockfd, a descriptor representing the socket.
my_addr, a pointer to a sockaddr structure representing the address to bind to.
addrlen, a field of type socklen_t specifying the size of the sockaddr structure.

bind() returns 0 on success and -1 if an error occurs. After a socket has been associated with an address, listen() prepares it for incoming connections. However, this is only necessary for the stream-oriented, connection-oriented socket types. listen() requires two arguments: a valid socket descriptor, and backlog, an integer representing the number of pending connections that can be queued up at any one time; the operating system usually places a cap on this value.
In computer science, inter-process communication (IPC) refers to the mechanisms an operating system provides to allow processes to manage shared data. Applications that use IPC can be categorized as clients and servers, where the client requests data and the server responds to client requests; many applications are both clients and servers, as is common in distributed computing. Methods of IPC are divided into categories which vary based on software requirements, such as performance and modularity requirements, and system circumstances, such as network bandwidth and latency. IPC is important to the design process for microkernels and nanokernels: microkernels reduce the number of functionalities provided by the kernel, and those functionalities are then obtained by communicating with servers via IPC, drastically increasing the amount of IPC compared with a regular monolithic kernel. Depending on the solution, an IPC mechanism may provide synchronization or leave it up to processes and threads to synchronize amongst themselves.
While synchronization carries some information, it is not an information-passing communication mechanism per se. Examples of synchronization primitives are:

Semaphore
Spinlock
Barrier
Mutual exclusion (mutex)

Examples of remote procedure call mechanisms built on IPC include Java's Remote Method Invocation (RMI), ONC RPC, XML-RPC, SOAP, JSON-RPC, Message Bus, and .NET Remoting.
Red Hat, Inc. is an American multinational software company providing open-source software products to the enterprise community. Founded in 1993, Red Hat has its corporate headquarters in Raleigh, North Carolina, with other offices worldwide. Red Hat has become associated to a large extent with its enterprise operating system Red Hat Enterprise Linux and with its acquisition of open-source enterprise middleware vendor JBoss. Red Hat also offers Red Hat Virtualization, an enterprise virtualization product, and provides storage, operating system platforms, applications, management products, and support and consulting services. Red Hat creates and contributes to many free software projects; it has also acquired several proprietary software product codebases through corporate mergers and acquisitions and has released such software under open-source licenses. As of March 2016, Red Hat is the second largest corporate contributor to the Linux kernel version 4.14 after Intel. On October 28, 2018, IBM announced its intent to acquire Red Hat for $34 billion.
In 1993, Bob Young incorporated the ACC Corporation, a catalog business that sold Linux and Unix software accessories. In 1994, Marc Ewing created his own Linux distribution. Ewing released the software in October, and it became known as the Halloween release. Young bought Ewing's business in 1995, and the two merged to become Red Hat Software, with Young serving as chief executive officer. Red Hat went public on August 11, 1999, achieving the eighth-biggest first-day gain in the history of Wall Street. Matthew Szulik succeeded Bob Young as CEO in December of that year. Bob Young went on to found the online print-on-demand and self-publishing company Lulu in 2002. On November 15, 1999, Red Hat acquired Cygnus Solutions. Cygnus provided commercial support for free software and housed maintainers of GNU software products such as the GNU Debugger and GNU Binutils. One of the founders of Cygnus, Michael Tiemann, became the chief technical officer of Red Hat and, by 2008, the vice president of open-source affairs.
Red Hat acquired WireSpeed, C2Net and Hell's Kitchen Systems. In February 2000, InfoWorld awarded Red Hat its fourth consecutive "Operating System Product of the Year" award for Red Hat Linux 6.1. Red Hat acquired Planning Technologies, Inc. in 2001 and AOL's iPlanet directory and certificate-server software in 2004. Red Hat moved its headquarters from Durham to North Carolina State University's Centennial Campus in Raleigh, North Carolina in February 2002. In the following month Red Hat introduced Red Hat Linux Advanced Server, later renamed Red Hat Enterprise Linux. Dell, IBM, HP and Oracle Corporation announced their support of the platform. In December 2005, CIO Insight magazine conducted its annual "Vendor Value Survey", in which Red Hat ranked #1 in value for the second year in a row. Red Hat stock became part of the NASDAQ-100 on December 19, 2005. Red Hat acquired open-source middleware provider JBoss on June 5, 2006, and JBoss became a division of Red Hat. On September 18, 2006, Red Hat released the Red Hat Application Stack, which integrated the JBoss technology and was certified by other well-known software vendors.
On December 12, 2006, Red Hat stock moved from trading on NASDAQ to the New York Stock Exchange. In 2007 Red Hat made an agreement with Exadel to distribute its software. On March 15, 2007, Red Hat released Red Hat Enterprise Linux 5, and in June it acquired Mobicents. On March 13, 2008, Red Hat acquired Amentra, a provider of systems integration services for service-oriented architecture, business process management, systems development and enterprise data services. On July 27, 2009, Red Hat replaced CIT Group in Standard and Poor's 500 stock index, a diversified index of 500 leading companies of the U.S. economy. This was reported as a major milestone for Linux. On December 15, 2009, it was reported that Red Hat would pay US$8.8 million to settle a class action lawsuit related to the restatement of financial results from July 2004. The suit had been pending in U.S. District Court for the Eastern District of North Carolina. Red Hat reached the proposed settlement agreement and recorded a one-time charge of US$8.8 million for the quarter that ended November 30.
On January 10, 2011, Red Hat announced that it would expand its headquarters in two phases, adding 540 employees to the Raleigh operation and investing over US$109 million. The state of North Carolina offered up to US$15 million in incentives; the second phase involved "expansion into new technologies such as software virtualization and technology cloud offerings". On August 25, 2011, Red Hat announced it would move about 600 employees from the N.C. State Centennial Campus to Two Progress Plaza downtown. A ribbon-cutting ceremony was held in June 2013 in the re-branded Red Hat Headquarters. In 2012, Red Hat became the first one-billion-dollar open-source company, reaching US$1.13 billion in annual revenue during its fiscal year. Red Hat passed the $2 billion benchmark in 2015; as of February 2018, the company's annual revenue was nearly $3 billion. On October 16, 2015, Red Hat announced its acquisition of IT automation startup Ansible, reportedly for an estimated US$100 million. In May 2018, Red Hat acquired CoreOS.
On October 28, 2018, IBM announced its intent to acquire Red Hat for US$34 billion, in one of its largest-ever acquisitions. The company would operate out of IBM's Hybrid Cloud division. Red Hat's lead advisor was Guggenheim Securities LLC. Red Hat sponsors the Fedora Project, a community-supported free software project that aims to promote the rapid progress of free and open-source software and content