Data General Nova
The Data General Nova is a series of 16-bit minicomputers released by the American company Data General. The first model, known simply as the "Nova", was released in 1969; it was packaged in a single rack-mount case and had enough power to handle most simple computing tasks. The Nova became popular in science laboratories around the world and was eventually succeeded by the Data General Eclipse, which was similar in most ways but added virtual memory support and other features required by modern operating systems. Edson de Castro had been the product manager of the pioneering PDP-8 at Digital Equipment Corporation (DEC), a 12-bit computer considered by most to be the first true minicomputer. De Castro was convinced that it was possible to improve upon the PDP-8 by building a 16-bit minicomputer with better performance at lower cost. He left DEC along with another hardware engineer, Richard Sogge, and a software engineer, Henry Burkhardt III, to found Data General in 1968. The fourth founder, Herbert Richman, had been a salesman for Fairchild Semiconductor and knew the others through his contacts with Digital Equipment.
In keeping with the PDP-8, the Nova was based on 15-by-15-inch printed circuit boards. The boards were designed so they could be connected together using a printed circuit backplane with minimal manual wiring, allowing all the boards to be built in an automated fashion; this reduced costs over the traditional wire-wrapping technique. The larger-board construction also made the Nova more reliable, which made it attractive for industrial or lab settings. Fairchild Semiconductor provided the new medium-scale integration chips used throughout the system. The Nova was one of the first 16-bit minicomputers and was a leader in the move to word lengths that were multiples of the 8-bit byte in that market. DG released the Nova in 1969 at a base price of US$3,995, advertising it as "the best small computer in the world." The basic model was not useful out of the box; adding RAM in the form of core memory brought the price up to $7,995. Starting in 1969, Data General shipped a total of 50,000 Novas at $8,000 each.
The Nova's biggest competition came from the new DEC PDP-11 computer series and, to a lesser extent, the older DEC PDP-8 systems. The Nova became popular in scientific and laboratory uses. Subsequent Nova models were:
- May 1970 - SuperNova, a somewhat faster version of the Nova.
- Dec 1971 - Nova 1200, with faster memory at 1200 nanoseconds.
- Mar 1971 - Nova 800, with faster memory at 800 nanoseconds.
- Apr 1973 - Nova 840, which featured hardware memory mapping.
- Sep 1973 - Nova 2.
- 1975 - Nova 3, which combined features from all previous Nova designs and added a useful stack and other instructions.
The Nova 1200 executed core memory access instructions in 2.55 microseconds; use of read-only memory saved 0.4 μs. Among the accumulator-group instructions, MUL took 2.55 μs, DIV 3.75 μs, and ISZ 3.15-4.5 μs. On the Eclipse MV/6000, LDA and STA took 0.44 μs; ADD and similar instructions took 0.33 μs, MUL 2.2 μs, DIV 3.19 μs, ISZ 1.32 μs, FAD 5.17 μs, and FMMD 11.66 μs.
The TX-0, for Transistorized Experimental computer zero, affectionately referred to as tixo, was an early transistorized computer that contained a then-huge 64K of 18-bit words of magnetic core memory. Construction of the TX-0 began in 1955 and ended in 1956, and it was used continually through the 1960s at MIT. The TX-0 incorporated around 3,600 Philco high-frequency surface-barrier transistors, the first transistor suitable for high-speed computers. The TX-0 and its direct descendant, the original PDP-1, were platforms for pioneering computer research and for the development of what would be called computer "hacker" culture. Designed at the MIT Lincoln Laboratory as an experiment in transistorized design and in the construction of large core memory systems, the TX-0 was essentially a transistorized version of the famous Whirlwind built at Lincoln Lab. While the Whirlwind filled an entire floor of a large building, the TX-0 fit in a single reasonably sized room and was somewhat faster. Like the Whirlwind, the TX-0 was equipped with a vector display system, consisting of a 12-inch oscilloscope with a working area of 7 by 7 inches connected to the 18-bit output register of the computer, allowing it to display points and vectors at a resolution of up to 512×512 screen locations.
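The 512×512 figure follows from the register width: two 9-bit coordinates (2^9 = 512 each) fit exactly into one 18-bit word. The sketch below packs and unpacks a point address on that assumption; the actual bit layout used by the TX-0 display hardware is not documented here and is assumed purely for illustration.

```python
# Hedged sketch: packing a 512x512 display point address into one 18-bit word.
# Nine bits per axis is what makes the stated resolution fit an 18-bit
# register; the concrete TX-0 bit layout is an assumption, not a reference.

def pack_point(x: int, y: int) -> int:
    """Combine two 9-bit display coordinates into an 18-bit word."""
    assert 0 <= x < 512 and 0 <= y < 512
    return (x << 9) | y

def unpack_point(word: int) -> tuple[int, int]:
    """Recover the (x, y) coordinates from an 18-bit word."""
    return (word >> 9) & 0x1FF, word & 0x1FF

print(oct(pack_point(256, 128)))   # -> 0o400200
print(unpack_point(0o400200))      # -> (256, 128)
```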
The TX-0 was an 18-bit computer with a 16-bit address range. The first two bits of the machine word designated the instruction, and the remaining 16 bits were used to specify a memory location or an operand for the special "operate" instruction. The two instruction bits allowed four possible instructions, which included store and conditional branch instructions as a basic set; the fourth instruction, "operate", took additional operands and allowed access to a number of "micro-orders" which could be used separately or together to provide many other useful instructions (a decoding sketch for this format appears below). An "add" instruction took 10 microseconds. Wesley A. Clark designed the TX-0 and Ken Olsen oversaw its engineering development. With the successful completion of the TX-0, work turned to the much larger and far more complex TX-1; however, this project soon ran into difficulties due to its complexity and was redesigned into a smaller form that would be delivered as the TX-2 in 1958. Since core memory was expensive at the time, several parts of the TX-0 memory were cannibalized for the TX-2 project.
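The two-bit-opcode format described above lends itself to a very small decoder. The following sketch splits an 18-bit word into its opcode and 16-bit address field; the mnemonics and their numeric order (sto, add, trn, opr) are the commonly cited ones for the original TX-0 and are used here as illustrative labels rather than an authoritative encoding.

```python
# Minimal sketch (not an emulator): decoding the original TX-0 instruction
# format -- a 2-bit opcode and a 16-bit address field in an 18-bit word.
# The opcode-to-mnemonic mapping below is an assumption for illustration.

OPCODES = {0: "sto", 1: "add", 2: "trn", 3: "opr"}

def decode_tx0(word: int) -> tuple[str, int]:
    """Split an 18-bit TX-0 word into (mnemonic, 16-bit address/operand)."""
    if not 0 <= word < 2 ** 18:
        raise ValueError("TX-0 words are 18 bits")
    opcode = word >> 16        # top two bits select one of four instructions
    address = word & 0xFFFF    # remaining 16 bits address up to 64K words
    return OPCODES[opcode], address

print(decode_tx0(0o300000))    # -> ('opr', 0), an "operate"-class word
```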
After a time, the TX-0 was no longer considered worth keeping at Lincoln Lab, and it was "loaned" to the MIT Research Laboratory of Electronics in July 1958, where it became a centerpiece of research that would evolve into the MIT Artificial Intelligence Lab and the original computer "hacker" culture. Delivered from Lincoln Laboratory with only 4K of core, the machine no longer needed 16 bits to represent a storage address. After about a year and a half, the number of instruction bits was doubled to four, allowing a total of 16 instructions, and an index register was added; this improved the programmability of the machine but still left room for a memory expansion to 8K. This newly modified TX-0 was used to develop a huge number of advances in computing, including speech and handwriting recognition, as well as the tools needed to work on such projects, including text editors and debuggers. Meanwhile, the TX-2 project was running into difficulties of its own, and several team members decided to leave the project at Lincoln Lab and start their own company, Digital Equipment Corporation.
After a short time selling "lab modules" in the form of simple logic elements from the TX-2 design, the newly formed Digital Equipment Corporation decided to produce a "cleaned up" TX-0 design, which it delivered in 1961 as the PDP-1. A year later, DEC donated the engineering prototype PDP-1 to MIT, where it was installed in the room next to the TX-0, and the two machines ran side by side for a decade. Significant pieces of the TX-0 are held by MIT Lincoln Laboratory. In 1983, the TX-0 was still running and is shown running a maze application in the first episode of Computer Chronicles.
Drum memory was a magnetic data storage device invented by Gustav Tauschek in 1932 in Austria. Drums were used in the 1950s and into the 1960s as computer memory. For many early computers, drum memory formed the main working memory of the machine; it was so common that these computers were often referred to as drum machines. Some drum memories were also used as secondary storage. Drums were displaced as primary computer memory by magnetic core memory, which offered a better balance of size, cost and potential for further improvements, and they were later replaced by hard disk drives for secondary storage, which were less expensive and denser; the manufacture of drums ceased in the 1970s. A drum memory contained a large metal cylinder coated on the outside surface with a ferromagnetic recording material; it could be considered a precursor to the hard disk drive, but in the form of a drum rather than a flat disk. In most designs, one or more rows of fixed read-write heads ran along the long axis of the drum, one head for each track.
The drum's controller selected the proper head and waited for the data to appear under it as the drum turned. Not all drum units were designed with each track having its own head: some, such as the English Electric DEUCE drum and the UNIVAC FASTRAND, had multiple heads moving a short distance on the drum, in contrast to modern HDDs, which have one head per platter surface. The performance of a drum with one head per track is determined entirely by the rotational latency, whereas the performance of an HDD includes a rotational latency delay plus the time to position the head over the desired track. In the era when drums were used as main working memory, programmers did "optimum programming": the programmer positioned code on the drum in such a way as to reduce the amount of time needed for the next instruction to rotate into place under the head. They did this by timing how long it would take, after loading an instruction, for the computer to be ready to read the next one, then placing that instruction on the drum so that it would arrive under a head just in time.
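To make the placement calculation concrete, here is a small, purely illustrative sketch; the sector count, rotation period and execution time are hypothetical values, not the timings of any particular drum machine.

```python
# Illustrative sketch of "optimum programming": choose the drum sector for the
# next instruction so that it rotates under the head just as the CPU finishes
# executing the current one. All numbers here are hypothetical.

SECTORS_PER_TRACK = 50            # assumed word positions around one track
REVOLUTION_TIME_MS = 8.0          # assumed drum rotation period
SECTOR_TIME_MS = REVOLUTION_TIME_MS / SECTORS_PER_TRACK

def next_sector(current_sector: int, execute_time_ms: float) -> int:
    """Sector that will be under the head when the current instruction ends."""
    sectors_passed = execute_time_ms / SECTOR_TIME_MS
    return (current_sector + round(sectors_passed)) % SECTORS_PER_TRACK

# If an instruction read from sector 10 takes 0.96 ms to execute, its
# successor is best placed six sectors further around the drum, not at 11:
print(next_sector(10, 0.96))      # -> 16
```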
This method of timing compensation, called the "skip factor" or "interleaving", was used for many years in storage memory controllers. Tauschek's original drum memory had a capacity of about 500,000 bits. One of the earliest functioning computers to employ drum memory was the Atanasoff–Berry computer, which stored 3,000 bits; the outer surface of its drum was lined with electrical contacts leading to capacitors contained within. Magnetic drums were developed for the US Navy during World War II, with the work continuing at Engineering Research Associates (ERA) in 1946 and 1947. An experimental study was completed at ERA and reported to the Navy on June 19, 1947. Other early drum storage device development occurred at Birkbeck College, Harvard University, IBM and the University of Manchester. An ERA drum was the internal memory for the Atlas 1 computer delivered to the US Navy in October 1950. Through mergers, ERA became a division of UNIVAC, which shipped the Series 1100 drum as part of the UNIVAC File Computer in 1956.
The first mass-produced computer, the IBM 650, had about 8.5 kilobytes of drum memory. As late as 1980, PDP-11/45 machines using magnetic core main memory and drums for swapping were still in use at many of the original UNIX sites. In BSD Unix and its descendants, /dev/drum was the name of the default virtual memory device, deriving from the use of drum secondary-storage devices as backup storage for pages in virtual memory. Drum memory is referenced in The Story of Mel, in which the skilled programmer Mel optimizes programs written for a drum memory computer by taking advantage of the time needed to process an instruction and the time for the drum to rotate, so that the next instruction or data arrives just as it is needed, or by optimizing in the opposite direction when the program should wait before proceeding. Magnetic drum memory units were used in the Minuteman ICBM launch control centers from the early 1960s until the REACT upgrades in the mid-1990s.
Mainframe computers or mainframes are computers used by large organizations for critical applications. They are larger and have more processing power than some other classes of computers, such as minicomputers, servers and personal computers. The term originally referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers, and was later used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are used as servers. Modern mainframe design is characterized less by raw computational speed and more by:
- redundant internal engineering resulting in high reliability and security;
- extensive input-output facilities with the ability to offload to separate engines;
- strict backward compatibility with older software;
- high hardware and computational utilization rates through virtualization to support massive throughput;
- hot-swapping of hardware, such as processors and memory.
Their high stability and reliability enable these machines to run uninterrupted for long periods of time, with mean time between failures measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are used in applications where downtime would be costly or catastrophic; the term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits compared with thousands for Windows, UNIX and Linux. Software upgrades require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back-office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used exclusively for applications rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although they were eventually supplanted by keyboard/display devices. By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text-terminal CRT displays with integral keyboards, or from personal computers equipped with terminal emulation software.
By the 1980s, many mainframes supported graphic display terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes have partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. Infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time; this technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.
While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot-swap system capacity without disrupting system function, with a specificity and granularity not available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs) and virtual machines. Many mainframe customers run two machines: one in their primary data center and one in their backup data center, either fully active, partially active, or on standby, in case there is a catastrophe affecting the first building. Test, development and production workloads for applications and databases can run on a single machine, except for large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage provided by EMC
Arithmetic logic unit
An arithmetic logic unit (ALU) is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating-point numbers. An ALU is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs). A single CPU, FPU or GPU may contain multiple ALUs. The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation between the ALU and external status registers. An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs.
A basic ALU has three parallel data buses: two input operands (A and B) and a result output (Y). Each data bus is a group of signals; the A, B and Y bus widths are identical and match the native word size of the external circuitry. The opcode input is a parallel bus that conveys to the ALU an operation selection code, an enumerated value that specifies the desired arithmetic or logic operation to be performed by the ALU; the opcode size determines the maximum number of different operations. An ALU opcode is not the same as a machine language opcode, though in some cases it may be directly encoded as a bit field within a machine language opcode. The status outputs are various individual signals that convey supplemental information about the result of the current ALU operation. General-purpose ALUs have status signals such as:
- Carry-out, which conveys the carry resulting from an addition operation, the borrow resulting from a subtraction operation, or the overflow bit resulting from a binary shift operation.
- Zero, which indicates all bits of Y are logic zero.
- Negative, which indicates the result of an arithmetic operation is negative.
- Overflow, which indicates the result of an arithmetic operation has exceeded the numeric range of Y.
- Parity, which indicates whether an even or odd number of bits in Y are logic one.
At the end of each ALU operation, the status output signals are stored in external registers to make them available for future ALU operations or for controlling conditional branching; the collection of bit registers that store the status outputs is often treated as a single, multi-bit register, referred to as the "status register" or "condition code register". The status inputs allow additional information to be made available to the ALU when performing an operation; typically this is a single "carry-in" bit, which is the stored carry-out from a previous ALU operation. An ALU is a combinational logic circuit, meaning that its outputs will change asynchronously in response to input changes. In normal operation, stable signals are applied to all of the ALU inputs and, when enough time has passed for the signals to propagate through the ALU circuitry, the result of the ALU operation appears at the ALU outputs.
The external circuitry connected to the ALU is responsible for ensuring the stability of ALU input signals throughout the operation, and for allowing sufficient time for the signals to propagate through the ALU before sampling the ALU result. In general, external circuitry controls an ALU by applying signals to its inputs; the external circuitry employs sequential logic to control the ALU operation, paced by a clock signal of a sufficiently low frequency to ensure enough time for the ALU outputs to settle under worst-case conditions. For example, a CPU begins an ALU addition operation by routing operands from their sources to the ALU's operand inputs, while the control unit applies a value to the ALU's opcode input, configuring it to perform addition. At the same time, the CPU routes the ALU result output to a destination register that will receive the sum. The ALU's input signals, which are held stable until the next clock, are allowed to propagate through the ALU and to the destination register while the CPU waits for the next clock.
When the next clock arrives, the destination register stores the ALU result and, since the ALU operation has completed, the ALU inputs may be set up for the next ALU operation. A number of basic arithmetic and bitwise logic functions are supported by ALUs. Basic, general-purpose ALUs include these operations in their repertoires:
- Add: A and B are summed and the sum appears at Y and carry-out.
- Add with carry: A, B and carry-in are summed and the sum appears at Y and carry-out.
- Subtract: B is subtracted from A and the difference appears at Y and carry-out. For this function, carry-out is a "borrow" indicator; this operation may be used to compare the magnitudes of A and B.
- Subtract with borrow: B is subtracted from A with borrow and the difference appears at Y and carry-out.
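The following sketch, assuming an 8-bit word size, illustrates how an opcode-selected repertoire like the one above and the status outputs described earlier fit together; it is a behavioural illustration, not a model of any particular ALU.

```python
# Behavioural sketch of a small ALU repertoire with status flags, for an
# assumed 8-bit word. Subtraction is implemented by adding the complement, so
# carry-out doubles as a "no borrow" indicator, one common convention.

WIDTH = 8
MASK = (1 << WIDTH) - 1

def alu(op: str, a: int, b: int, carry_in: int = 0):
    """Return (result, flags) for unsigned 8-bit operands a and b."""
    if op == "add":                       # A plus B
        full = a + b
    elif op == "adc":                     # add with carry
        full = a + b + carry_in
    elif op == "sub":                     # A minus B
        full = a + (b ^ MASK) + 1
    elif op == "sbb":                     # subtract with borrow (carry-in = not borrow)
        full = a + (b ^ MASK) + carry_in
    elif op in ("and", "or", "xor"):      # bitwise logic produces no carry
        full = {"and": a & b, "or": a | b, "xor": a ^ b}[op]
    else:
        raise ValueError(f"unknown opcode {op!r}")

    y = full & MASK
    flags = {
        "carry": (full >> WIDTH) & 1,          # carry-out / not-borrow
        "zero": int(y == 0),                   # all result bits are zero
        "negative": (y >> (WIDTH - 1)) & 1,    # sign bit of the result
        "parity": bin(y).count("1") & 1,       # 1 when an odd number of bits are set
    }
    if op in ("add", "adc", "sub", "sbb"):
        # Signed overflow: both addends agree in sign but the result differs.
        b_eff = b if op in ("add", "adc") else b ^ MASK
        flags["overflow"] = ((a ^ y) & (b_eff ^ y)) >> (WIDTH - 1) & 1
    return y, flags

print(alu("add", 200, 100))   # carry-out set: 300 does not fit in 8 bits
print(alu("sub", 5, 5))       # zero set, carry set (no borrow needed)
```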
The HP 2100 was a series of 16-bit minicomputers produced by Hewlett-Packard from the mid-1960s to the early 1990s. Tens of thousands of machines in the series were sold over its twenty-five-year lifetime, making HP the fourth-largest minicomputer vendor during the 1970s. The design started at Data Systems Inc. (DSI), where it was known as the DSI-1000; HP acquired the design and merged it into their Dymec division. The original model, the 2116A, built using integrated circuits and core memory, was released in 1966. Over the next four years, several new versions were released, including sub-models A through C with different types of memory and expansion, and the 2115 and 2114, cost-reduced versions of the 2116. All of these models were replaced by the HP 2100 series in 1971, and again by the 21MX series in 1974, when the core was replaced with semiconductor memory. These machines were also packaged as the HP 2000, combining a 2100-series machine with optional components in order to run the BASIC programming language in a multi-user time-sharing fashion.
Time-Shared BASIC was popular in the 1970s, and many early BASIC programs were written on or for the platform, most notably the seminal Star Trek game that was popular during the early home computer era. The People's Computer Company published their programs in HP 2000 format. The introduction of the HP 3000 in 1974 provided high-end competition to the 2100 series. A major re-engineering was introduced in 1979 as the 1000 L-Series, using CMOS large-scale integration chips and introducing a desk-side tower case model; this was the first version to break backward compatibility with previous 2100-series expansion cards. The final upgrade was the A-series, with new processors capable of more than 1 MIPS performance; the final A990 was released in 1990. HP formed Dynac in 1956 to act as a development shop for projects the main company would not take on; their original logo was the HP logo turned upside down, forming something approximating "dy". Learning that Westinghouse owned a trademark on that name, in 1958 they changed it to Dymec.
The company was brought in-house in 1959 to become the Dymec Division, and in November 1967 it was renamed the Palo Alto Division. Dymec made a variety of products for the HP family, but over time became an integrator, building test equipment and similar systems that were used by HP. In 1964, Kay Magleby and Paul Stoft began experimenting with the use of PDP-5 and PDP-8 computers to act as controllers for their complex test systems. However, they felt the machines would require changes to suit their needs. At the time, Digital Equipment Corporation was still a takeover target, but Dave Packard found Ken Olsen too difficult to deal with, and such plans went nowhere. Looking for another design they could purchase, Packard was led to the five-person Data Systems, Inc. of Detroit. DSI was owned by Union Carbide; when Packard asked how it was that Union Carbide came to own a computer company, HP Labs manager Barney Oliver replied, "We didn't demand an answer to that question." Bill Hewlett refused to consider the development of a "minicomputer", but when Packard reframed it as an "instrument controller" the deal was approved.
DSI was purchased in 1964 and set up at Dymec with four of the original five employees of DSI and a number of other employees coming from HP's instrumentation divisions. The computer group moved to its own offices in Cupertino, in a building purchased from Varian Associates, becoming the Cupertino Division. Led by Magleby, the new division completed the design as the 2116A, demonstrated 7–10 November 1966 at the Joint Computer Conference in San Francisco. It was one of the earliest 16-bit minis to hit the market, but is more notable as "an unusual new instrumentation computer" with an expandable design and real-time support. The system featured an oversized cabinet that held up to 16 expansion cards, or could be further expanded to 48 cards with an external expansion cage. The system launched with 20 different instrumentation cards, including "counters, nuclear scalers, electronic thermometers, digital voltmeters, ac/ohms converters, data amplifiers, input scanners." An additional set added input/output devices like tape drives, punched cards, paper tape and other peripherals.
Real-time service was provided by assigning each card slot a fixed interrupt vector that called the appropriate device driver. As the machine entered the market, it became clear it was selling much more into the business data processing market than the targeted instrumentation market. This led to the introduction of the 2115A in 1967, which removed much of the expansion capability to make a lower-cost offering for commercial users. A further simplified version shipped as the 2114A in 1968, which had only eight slots, leaving room for the power supply to be incorporated into the main chassis. The 2115 and 2114 lacked the extensive DMA control of the 2116, removed some of the mathematical operations, and ran at slower speeds. These are the original models, using core memory and a hardwired CPU:
- 2116A: 10 MHz clock, 1.6 microsecond cycle time. Supplied with 4 kwords of memory, expandable to 8 k internally or 16 k with an external memory system. The chassis includes 16 expandable I/O slots. Weight 230 pounds. Introduced November 1966; it marked HP's first use of integrated circuits.
- 2116B: supported a new 32 k memory expansion option. Weight as above. Introduced September 1968.
- 2116C: used smaller core so a full 32 k could fit in the main chassis. Introduced October 1970.
- 2115A: short-lived cost-reduced version that removed the
Gottfried Wilhelm Leibniz
Gottfried Wilhelm Leibniz was a German polymath and philosopher who occupies a prominent place in the history of mathematics and the history of philosophy. His most notable accomplishment was conceiving the ideas of differential and integral calculus independently of Isaac Newton's contemporaneous developments. Mathematical works have consistently favored Leibniz's notation as the conventional expression of calculus, while Newton's notation fell into disuse; it was only in the 20th century that Leibniz's law of continuity and transcendental law of homogeneity found mathematical implementation. He became one of the most prolific inventors in the field of mechanical calculators: while working on adding automatic multiplication and division to Pascal's calculator, he was the first to describe a pinwheel calculator, in 1685, and he invented the Leibniz wheel, used in the arithmometer, the first mass-produced mechanical calculator. He also refined the binary number system, which is the foundation of all digital computers. In philosophy, Leibniz is most noted for his optimism, i.e. his conclusion that our universe is, in a restricted sense, the best possible one that God could have created, an idea that was often lampooned by others such as Voltaire.
Leibniz, along with René Descartes and Baruch Spinoza, was one of the three great 17th-century advocates of rationalism. The work of Leibniz anticipated modern logic and analytic philosophy, but his philosophy also looks back to the scholastic tradition, in which conclusions are produced by applying reason to first principles or prior definitions rather than to empirical evidence. Leibniz made major contributions to physics and technology, and anticipated notions that surfaced much later in philosophy, probability theory, medicine, psychology and computer science. He wrote works on philosophy, law, theology and philology, and also contributed to the field of library science: while serving as overseer of the Wolfenbüttel library in Germany, he devised a cataloging system that would serve as a guide for many of Europe's largest libraries. Leibniz's contributions to this vast array of subjects were scattered in various learned journals, in tens of thousands of letters, and in unpublished manuscripts. He wrote in several languages, but primarily in Latin and German.
There is no complete gathering of the writings of Leibniz translated into English. Gottfried Leibniz was born on 1 July 1646, toward the end of the Thirty Years' War, in Leipzig, Saxony, to Friedrich Leibniz and Catharina Schmuck. Friedrich noted in his family journal: "21. Juny am Sontag 1646 Ist mein Sohn Gottfried Wilhelm, post sextam vespertinam 1/4 uff 7 uhr abents zur welt gebohren, im Wassermann." In English: "On Sunday 21 June 1646, my son Gottfried Wilhelm is born into the world a quarter before seven in the evening, in Aquarius." Leibniz was baptized on 3 July of that year at Leipzig. His father died when he was six years old, and from that point on he was raised by his mother. Leibniz's father had been a Professor of Moral Philosophy at the University of Leipzig, and the boy inherited his father's personal library, to which he was given free access from the age of seven. While Leibniz's schoolwork was confined to the study of a small canon of authorities, his father's library enabled him to study a wide variety of advanced philosophical and theological works, ones that he would not otherwise have been able to read until his college years.
Access to his father's library, largely written in Latin, led to his proficiency in the Latin language, which he achieved by the age of 12. At the age of 13 he composed 300 hexameters of Latin verse in a single morning for a special event at school. In April 1661 he enrolled in his father's former university at age 14, and he completed his bachelor's degree in Philosophy in December 1662. He defended his Disputatio Metaphysica de Principio Individui, which addressed the principle of individuation, on 9 June 1663. Leibniz earned his master's degree in Philosophy on 7 February 1664; in December 1664 he published and defended a dissertation, Specimen Quaestionum Philosophicarum ex Jure collectarum, arguing for both a theoretical and a pedagogical relationship between philosophy and law. After one year of legal studies, he was awarded his bachelor's degree in Law on 28 September 1665; his dissertation was titled De conditionibus. In early 1666, at age 19, Leibniz wrote his first book, De Arte Combinatoria, the first part of which was also his habilitation thesis in Philosophy, which he defended in March 1666.
His next goal was to earn his license and Doctorate in Law, which required three years of study. In 1666, the University of Leipzig turned down Leibniz's doctoral application and refused to grant him a Doctorate in Law, most likely due to his relative youth. Leibniz subsequently left Leipzig. He enrolled in the University of Altdorf and submitted a thesis he had been working on earlier in Leipzig, titled Disputatio Inauguralis de Casibus Perplexis in Jure. Leibniz earned his license to practice law and his Doctorate in Law in November 1666. He next declined the offer of an academic appointment at Altdorf, saying that "my thoughts were turned in an entirely different direction". As an adult, Leibniz often