Conversation threading is a feature used by many email clients, bulletin boards and Internet forums in which the software aids the user by visually grouping messages with their replies. These groups are called a conversation, topic thread, or simply a thread. A discussion forum, e-mail client or news client is said to have a "conversation view", "threaded topics" or a "threaded mode" if messages can be grouped in this manner. Threads can be displayed in a variety of ways. Early messaging systems would automatically include the original message text in a reply, making each individual email its own copy of the entire thread. Software may also arrange threads of messages within lists, such as an email inbox; these arrangements can be hierarchical or nested, arranging messages close to their replies in a tree, or they can be linear or flat, displaying all messages in chronological order regardless of reply relationships. Threaded discussions allow the reader to appreciate the overall structure of a conversation.
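The two display arrangements described above can be sketched in a few lines of code. The following Python snippet, using hypothetical message data, contrasts a flat chronological view with a nested tree view that indents each reply under its parent:

```python
from datetime import datetime

# Hypothetical thread: each message has an id, a parent id (None for the
# root of the thread), a timestamp, and some text.
messages = [
    {"id": 1, "parent": None, "ts": datetime(2024, 1, 1, 9, 0), "text": "Original post"},
    {"id": 2, "parent": 1, "ts": datetime(2024, 1, 1, 10, 0), "text": "First reply"},
    {"id": 3, "parent": 2, "ts": datetime(2024, 1, 1, 12, 0), "text": "Reply to the reply"},
    {"id": 4, "parent": 1, "ts": datetime(2024, 1, 1, 11, 0), "text": "Second reply"},
]

def flat_view(msgs):
    """Linear/flat view: all messages in chronological order."""
    return [m["text"] for m in sorted(msgs, key=lambda m: m["ts"])]

def threaded_view(msgs, parent=None, depth=0):
    """Hierarchical/nested view: each reply indented under its parent."""
    out = []
    for m in sorted(msgs, key=lambda m: m["ts"]):
        if m["parent"] == parent:
            out.append("  " * depth + m["text"])
            out += threaded_view(msgs, m["id"], depth + 1)
    return out

print(flat_view(messages))
print(threaded_view(messages))
```

Note how the flat view interleaves "Second reply" between "First reply" and its child, while the tree view keeps each reply adjacent to the message it answers.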
As such, threading is most useful in situations with extended conversations or debates involving complex multi-step tasks – as found in newsgroups and complicated email chains – as opposed to simple single-step tasks. Email allows messages to be targeted at particular members of the audience by using the "To" and "CC" lines. However, some message systems do not have this option; as a result, it can be difficult to determine the intended recipient of a particular message. When messages are displayed hierarchically, it is easier to visually identify the author of the previous message; even so, it can be difficult to process, evaluate and integrate important information when viewing large lists of messages. Grouping messages by thread makes reviewing large numbers of messages in the context of a given discussion topic more time-efficient and less mentally taxing, freeing time and mental resources to further extend and advance discussions within each individual topic or thread. In group forums, allowing users to reply to threads reduces the number of new posts shown in the list.
Some clients allow operations on entire threads of messages. For example, the text-based newsreader nn has a "kill" function which automatically deletes incoming messages matching user-defined rules on the message's subject or author; this can reduce the number of messages one has to check and delete manually. Accurate threading requires the software to identify which messages are replies to which others, and some algorithms used for this purpose can be unreliable. For example, email clients that use the subject line to relate messages can be fooled by two unrelated messages that happen to have the same subject line. Modern email clients use unique identifiers in email headers to locate the parent and root message in the hierarchy; however, because threading depends on all clients respecting these optional mail standards when composing replies, non-compliant clients can confuse the thread structure. Messages within a thread do not always provide the user with the same options as individual messages.
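As an illustrative sketch of identifier-based threading, the snippet below links messages into parent/child relationships using their Message-ID and In-Reply-To header values. The message data is hypothetical, and real clients typically implement more robust algorithms (such as JWZ-style threading over the full References header); this sketch only shows the basic idea, including the fallback when a non-compliant client has dropped the reply header:

```python
def build_threads(messages):
    """Group messages into trees using Message-ID / In-Reply-To values.

    messages: list of dicts with 'id' (Message-ID) and 'parent'
              (In-Reply-To value, or None for a new conversation).
    Returns (roots, children): thread-starting ids, and a map from
    each message id to the ids of its direct replies.
    """
    children = {m["id"]: [] for m in messages}
    roots = []
    for m in messages:
        parent = m["parent"]
        if parent in children:
            children[parent].append(m["id"])
        else:
            # Parent unknown (header missing or dropped by a
            # non-compliant client): start a new thread here.
            roots.append(m["id"])
    return roots, children

msgs = [
    {"id": "<a@x>", "parent": None},
    {"id": "<b@x>", "parent": "<a@x>"},
    {"id": "<c@x>", "parent": "<b@x>"},
]
roots, children = build_threads(msgs)
print(roots)              # ["<a@x>"]
print(children["<a@x>"])  # ["<b@x>"]
```

A subject-line heuristic, by contrast, would merge any two unrelated messages titled "Meeting", which is exactly the failure mode described above.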
For example, it may not be possible to move, reply to, archive, or delete individual messages that are contained within a thread. The lack of individual message control can prevent messaging systems from being used as to-do lists, since individual messages containing information relevant to a to-do item can get lost in a long thread. In messaging systems that display threads hierarchically, discussions can become fragmented: unlike in systems that display messages linearly, it is much easier to reply to messages that are not the most recent in the thread. Thread fragmentation can be problematic for systems that allow users to choose different display modes, because users of the hierarchical display mode will reply to older messages, confusing users of the linear display mode. Messaging software that can display messages by thread includes Apple Mail, Emacs Gnus, FastMail, Forte Agent, Gmail, Mailbird, Microsoft Outlook and Thunderbird; websites with threaded discussions include 4chan, Hacker News, Zulip, MSN Groups, Reddit, Slashdot and Yahoo! Groups, as well as the webmail client Roundcube. A contrasting method is document mode, which displays only the result of the last page update.
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines: computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, long before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom", making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business, to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. The close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own right. Although many initially believed it was impossible that computers themselves could be a scientific field of study, in the late fifties this view became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating ... if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace. Time has seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base.
Initially, computers were quite costly, and some degree of human aid was needed for efficient use—in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society—in fact, along with electronics, it is
ArXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, statistics, mathematical finance and economics, which can be accessed online. In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and hit a million by the end of 2014. By October 2016 the submission rate had grown to more than 10,000 per month. ArXiv was made possible by the compact TeX file format, which allowed scientific papers to be transmitted over the Internet and rendered client-side. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Paul Ginsparg recognized the need for central storage, and in August 1991 he created a central repository mailbox stored at the Los Alamos National Laboratory which could be accessed from any computer.
Additional modes of access were soon added: FTP in 1991, Gopher in 1992, and the World Wide Web in 1993. The term e-print was adopted to describe the articles. It began as a physics archive, called the LANL preprint archive, but soon expanded to include astronomy, computer science, quantitative biology and, most recently, statistics. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the expanding technology, in 2001 Ginsparg changed institutions to Cornell University and changed the name of the repository to arXiv.org. It is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists upload their papers to arXiv.org for worldwide access and sometimes for reviews before they are published in peer-reviewed journals. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is $826,000 for 2013 to 2017, funded jointly by Cornell University Library, the Simons Foundation and annual fee income from member institutions.
This model arose in 2010, when Cornell sought to broaden the financial funding of the project by asking institutions to make annual voluntary contributions based on the amount of download usage by each institution. Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,000 to $4,400. Cornell's goal is to raise at least $504,000 per year through membership fees generated by 220 institutions. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. Although arXiv is not peer reviewed, a collection of moderators for each area review the submissions; the lists of moderators for many sections of arXiv are publicly available, but moderators for most of the physics sections remain unlisted.
Additionally, an "endorsement" system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but only to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for restricting scientific inquiry. A majority of the e-prints are also submitted to journals for publication, but some work, including some influential papers, remains purely as e-prints and is never published in a peer-reviewed journal. A well-known example of the latter is an outline of a proof of Thurston's geometrization conjecture, including the Poincaré conjecture as a particular case, uploaded by Grigori Perelman in November 2002.
Perelman appears content to forgo the traditional peer-reviewed journal process, stating: "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it". Despite this non-traditional method of publication, other mathematicians recognized this work by offering the Fields Medal and Clay Mathematics Millennium Prizes to Perelman, both of which he refused. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the arXiv software if generating the final PDF file fails, if any image file is too large, or if the total size of the submission is too large. ArXiv now allows one to store and modify an incomplete submission, and only finalize the submission when ready; the time stamp on the article is set when the submission is finalized. The standard access route is through one of several mirrors. Sev
A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there are supercomputers which can perform up to nearly a hundred quadrillion FLOPS. Since November 2017, all of the world's fastest 500 supercomputers run Linux-based operating systems. Additional research is being conducted in China, the United States, the European Union and Japan to build faster, more powerful and more technologically superior exascale supercomputers. Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling, and physical simulations. Throughout their history, they have been essential in the field of cryptanalysis. Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram.
The first such machines were tuned conventional designs that ran faster than their more general-purpose contemporaries. Through the 1960s, they began to add increasing amounts of parallelism, with one to four processors being typical. From the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors have been the norm. The US has long been the leader in the supercomputer field, first through Cray's uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 90s, and since then China has become increasingly active in the field. As of November 2018, the fastest supercomputer on the TOP500 supercomputer list is Summit, in the United States, with a LINPACK benchmark score of 143.5 PFLOPS, exceeding the second-place Sierra by around 48.860 PFLOPS.
The US has five of the top 10 and China has two. In June 2018, all supercomputers on the list combined broke the 1 exaflops mark. In 1960, Sperry Rand built the Livermore Atomic Research Computer, today considered among the first supercomputers, for the US Navy Research and Development Center; it still used high-speed drum memory, rather than the newly emerging disk drive technology. Also among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller, and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and, despite not meeting the challenge of a hundredfold increase in performance, was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas operating system swapped data in the form of pages between the magnetic core and the drum, and introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time. Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second. The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run faster, and the overheating problem was solved by introducing refrigeration into the supercomputer design.
Thus the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market, with one hundred computers sold at $8 million each. Cray left CDC in 1972 to form Cray Research. Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history. The Cray-2 was released in 1985. It had eight central processing units, liquid cooling, and the electronics coolant liquid Fluorinert was pumped through the supercomputer architecture; it was the world's second fastest, after the M-13 supercomputer in Moscow. The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, i
An Internet forum, or message board, is an online discussion site where people can hold conversations in the form of posted messages. They differ from chat rooms in that messages are often longer than one line of text, and are at least temporarily archived. Also, depending on the access level of a user or the forum set-up, a posted message might need to be approved by a moderator before it becomes publicly visible. Forums have a specific set of jargon associated with them. A discussion forum is hierarchical or tree-like in structure: a forum can contain a number of subforums, each of which may have several topics. Within a forum's topic, each new discussion started is called a thread and can be replied to by as many people as wish to. Depending on the forum's settings, users can be anonymous or have to register with the forum and subsequently log in to post messages. On most forums, users do not have to log in to read existing messages. The modern forum originated from bulletin boards and so-called computer conferencing systems, and is a technological evolution of the dialup bulletin board system.
From a technological standpoint, forums or boards are web applications managing user-generated content. Early Internet forums could be described as a web version of an electronic mailing list or newsgroup. Later developments emulated the different newsgroups or individual lists, providing more than one forum, each dedicated to a particular topic. Internet forums are prevalent in several developed countries. Japan posts the most, with over two million posts per day on 2channel. China also has many millions of posts on forums such as Tianya Club. Some of the first forum systems were the Planet-Forum system, developed at the beginning of the 1970s, the EIES system, first operational in 1976, and the KOM system, first operational in 1977. One of the first forum sites is Delphi Forums, once called Delphi; the service, with four million members, dates to 1983. Forums perform a function similar to that of dial-up bulletin board systems and Usenet networks that were first created starting in the late 1970s. Early web-based forums date back as far as 1994, with the WIT project from the W3 Consortium, and starting from this time, many alternatives were created.
A sense of virtual community often develops around forums that have regular users. Technology, video games, music, fashion and politics are popular areas for forum themes, but there are forums for a huge number of topics. Internet slang and image macros popular across the Internet are abundant and widely used in Internet forums. Forum software packages are widely available on the Internet and are written in a variety of programming languages, such as PHP, Java and ASP. The configuration and records of posts can be stored in a database. Each package offers different features, from the most basic, providing text-only postings, to more advanced packages offering multimedia support and formatting code. Many packages can be integrated into an existing website to allow visitors to post comments on articles. Several other web applications, such as blog software, also incorporate forum features. WordPress comments at the bottom of a blog post allow for a single-threaded discussion of any given blog post. Slashcode, on the other hand, is far more complicated, allowing threaded discussions and incorporating a robust moderation and meta-moderation system as well as many of the profile features available to forum users.
Some stand-alone threads on forums have reached fame and notability, such as the "I am lonely will anyone speak to me" thread on MovieCodec.com's forums, described as the "web's top hangout for lonely folk" by Wired magazine. A forum consists of a tree-like directory structure. The top end is "Categories": a forum can be divided into categories for the relevant discussions. Under the categories are sub-forums, and these sub-forums can further have more sub-forums. The topics come under the lowest level of sub-forums, and these are the places under which members can start their discussions or posts. Logically, forums are organized into a finite set of generic topics, driven and updated by a group known as members and governed by a group known as moderators; a forum can also have a graph structure. All message boards will use one of three possible display formats; each of the three basic formats – Non-Threaded, Semi-Threaded and Fully Threaded – has its own advantages and disadvantages. If messages are not related to one another at all, a Non-Threaded format is best.
If a user has a message topic and multiple replies to that message topic, a Semi-Threaded format is best. If a user has a message topic, replies to that message topic, and responses to those replies, a Fully Threaded format is best. Internally, Western-style forums organize visitors and logged-in members into user groups. Privileges and rights are given based on these groups. A user of the forum can automatically be promoted to a more privileged user group based on criteria set by the administrator. A person viewing a closed thread as a member will see a box saying he does not have the right to submit messages there, but a moderator will see the same box granting him access to more than just posting messages. An unregistered user of the site is known as a guest or visitor. Guests are granted access to all functions that do not require database alterations or breach privacy. A guest can view the contents of the forum or use such features as read marking, but an administrator will disallow visi
A personal computer is a multi-purpose computer whose size and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, time-sharing by many people at the same time is not used with personal computers. Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, usually these systems run commercial software, free-of-charge software or free and open-source software, which is provided in ready-to-run form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often only available through a manufacturer-supported channel, and end-user program development may be discouraged by lack of support by the manufacturer.
Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include free and open-source Unix-like operating systems such as Linux. Advanced Micro Devices provides the main alternative to Intel's processors. The advent of personal computers and the concurrent Digital Revolution have significantly affected the lives of people in all countries. "PC" is an initialism for "personal computer". The IBM Personal Computer incorporated the designation in its model name, and it is sometimes useful to distinguish personal computers of the "IBM Personal Computer" family from personal computers made by other manufacturers. For example, "PC" is used in contrast with "Mac", an Apple Macintosh computer. Since none of these Apple products were mainframes or time-sharing systems, they were all "personal computers" and not "PC" computers.
The "brain" may one day come down to our level and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far. In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC, which became operational in 1946, could be run by a single, albeit highly trained, person. This mode pre-dated the batch programming, or time-sharing, modes with multiple users connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation, or engineering purposes were also built, and could be operated by one person in an interactive fashion. Examples include such systems as the Bendix G15 and LGP-30 of 1956, the Programma 101 introduced in 1964, and the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person.
In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of what would become the staples of daily working life in the 21st century: e-mail, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time. The development of the microprocessor, with widespread commercial availability starting in the mid-1970s, made computers cheap enough for small businesses and individuals to own. Early personal computers—generally called microcomputers—were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers. Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008.
It was built starting in 1972, and a few hundred units were sold. This had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use. The CPU design implemented in the Datapoint 2200 became the basis for the x86 architecture used in the original IBM PC and its descendants. In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP based on the IBM PALM processor with a Philips compact cassette drive, small CRT, and full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal, single-user portable computer now resides in the Smithsonian Institution, Washington, D.
C.. Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weigh
Molecular mechanics uses classical mechanics to model molecular systems. The Born–Oppenheimer approximation is assumed valid, and the potential energy of all systems is calculated as a function of the nuclear coordinates using force fields. Molecular mechanics can be used to study molecular systems ranging in size and complexity from small molecules to large biological systems or material assemblies with many thousands to millions of atoms. All-atomistic molecular mechanics methods have the following properties:

- Each atom is simulated as one particle
- Each particle is assigned a radius and a constant net charge
- Bonded interactions are treated as springs with an equilibrium distance equal to the experimental or calculated bond length

Variants on this theme are possible. For example, many simulations have used a united-atom representation in which each terminal methyl group or intermediate methylene unit was considered one particle, and large protein systems are commonly simulated using a bead model that assigns two to four particles per amino acid.
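The all-atom representation above can be sketched as a simple data structure. The field names and parameter values here are illustrative assumptions, not taken from any particular simulation package:

```python
from dataclasses import dataclass

# Hypothetical minimal all-atom representation: each atom is one
# particle carrying a position, an assigned radius, and a constant
# net (partial) charge.
@dataclass
class Particle:
    x: float        # position, in angstroms
    y: float
    z: float
    radius: float   # assigned radius, in angstroms
    charge: float   # constant net charge, in units of e

# A united-atom variant would map a whole CH3 or CH2 group onto one
# such particle; a bead model assigns two to four particles per
# amino acid instead of one per atom.
carbon = Particle(0.0, 0.0, 0.0, radius=1.7, charge=-0.1)
```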
The following functional abstraction, termed an interatomic potential function or force field in chemistry, calculates the molecular system's potential energy in a given conformation as a sum of individual energy terms:

E_total = E_covalent + E_noncovalent

where the components of the covalent and noncovalent contributions are given by the following summations:

E_covalent = E_bond + E_angle + E_dihedral
E_noncovalent = E_electrostatic + E_van_der_Waals

The exact functional form of the potential function, or force field, depends on the particular simulation program being used. The bond and angle terms are usually modeled as harmonic potentials centered around equilibrium bond-length values derived from experiment or from theoretical calculations of electronic structure performed with ab-initio software such as Gaussian. For accurate reproduction of vibrational spectra, the Morse potential can be used instead, at extra computational cost. The dihedral or torsional terms typically have multiple minima and thus cannot be modeled as harmonic oscillators, though their specific functional form varies with the implementation.
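As a sketch of how the covalent terms combine, the following minimal functions implement a harmonic bond, a harmonic angle, and a periodic torsion term of the common Vn/2 · (1 + cos(n·phi − gamma)) form. The functional forms and parameter names are generic illustrations; as noted above, each simulation program uses its own variants:

```python
import math

def e_bond(r, r0, k):
    # Harmonic bond stretch: k * (r - r0)^2, centered on the
    # equilibrium bond length r0 (some force fields use 1/2 k).
    return k * (r - r0) ** 2

def e_angle(theta, theta0, k):
    # Harmonic angle bend around the equilibrium angle theta0 (radians).
    return k * (theta - theta0) ** 2

def e_dihedral(phi, vn, n, gamma):
    # Periodic torsion with multiple minima, unlike the harmonic terms:
    # Vn/2 * (1 + cos(n*phi - gamma)).
    return 0.5 * vn * (1.0 + math.cos(n * phi - gamma))

def e_covalent(bonds, angles, dihedrals):
    # E_covalent = E_bond + E_angle + E_dihedral, summed over all
    # bonded terms in the molecule.
    return (sum(e_bond(*b) for b in bonds)
            + sum(e_angle(*a) for a in angles)
            + sum(e_dihedral(*d) for d in dihedrals))
```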
This class of terms may also include improper dihedral terms, which function as correction factors for out-of-plane deviations. The non-bonded terms are much more computationally costly to calculate in full, since a typical atom is bonded to only a few of its neighbors but interacts with every other atom in the molecule. Fortunately, the van der Waals term falls off rapidly. It is typically modeled using a 6–12 Lennard-Jones potential, which means that attractive forces fall off with distance as r^−6 and repulsive forces as r^−12, where r represents the distance between two atoms; the repulsive r^−12 part is, however, unphysical, because the true repulsion increases exponentially. Description of van der Waals forces by the Lennard-Jones 6–12 potential therefore introduces inaccuracies, which become significant at short distances. Generally, a cutoff radius is used to speed up the calculation, so that atom pairs whose distances are greater than the cutoff have a van der Waals interaction energy of zero. The electrostatic terms are notoriously difficult to calculate well because they do not fall off rapidly with distance, and long-range electrostatic interactions are often important features of the system under study.
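A minimal sketch of the truncated 6–12 Lennard-Jones term described above, assuming the common epsilon/sigma parameterization (parameter values in the comments are illustrative):

```python
def lennard_jones(r, epsilon, sigma, cutoff):
    # 6-12 Lennard-Jones: attraction falls off as r^-6, repulsion
    # as r^-12. Pairs farther apart than the cutoff radius contribute
    # zero energy, the simple truncation scheme described above.
    if r >= cutoff:
        return 0.0
    sr6 = (sigma / r) ** 6                    # (sigma/r)^6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)  # r^-12 minus r^-6 terms
```

The potential crosses zero at r = sigma and reaches its minimum depth of −epsilon at r = 2^(1/6)·sigma; the simple truncation leaves a small discontinuity at the cutoff, which is one motivation for the switching functions described below for the electrostatic terms.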
The basic functional form is the Coulomb potential, which falls off only as r^−1. A variety of methods are used to address this problem, the simplest being a cutoff radius similar to that used for the van der Waals terms. However, this introduces a sharp discontinuity between atoms inside and atoms outside the radius. Switching or scaling functions that modulate the apparent electrostatic energy are somewhat more accurate methods; they multiply the calculated energy by a smoothly varying scaling factor that goes from 1 at the inner cutoff radius to 0 at the outer cutoff radius. Other more sophisticated but computationally intensive methods include particle mesh Ewald and the multipole algorithm. In addition to the functional form of each energy term, a useful energy function must be assigned parameters for force constants, van der Waals multipliers, and other constant terms. These terms, together with the equilibrium bond and dihedral values, partial charge values, atomic masses and radii, and energy function definitions, are collectively termed a force field. Parameterization is done by fitting to experimental values and to the results of theoretical calculations.
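The switching approach can be sketched as follows, using one common polynomial switching form; the exact function varies by simulation program, and the constant 332.0636 kcal·Å/(mol·e²) is the conventional factor that yields energies in kcal/mol when charges are in units of e and distances in Å:

```python
def switch(r, r_on, r_off):
    # Smoothly varying scaling factor: 1 at or below the inner cutoff
    # r_on, 0 at or beyond the outer cutoff r_off, with a smooth
    # polynomial ramp in between (one common switching form).
    if r <= r_on:
        return 1.0
    if r >= r_off:
        return 0.0
    num = (r_off**2 - r**2) ** 2 * (r_off**2 + 2.0 * r**2 - 3.0 * r_on**2)
    return num / (r_off**2 - r_on**2) ** 3

def coulomb_switched(q1, q2, r, r_on, r_off, ke=332.0636):
    # Coulomb potential ke*q1*q2/r, multiplied by the scaling factor
    # so the apparent electrostatic energy goes smoothly to zero
    # between the inner and outer cutoff radii.
    return ke * q1 * q2 / r * switch(r, r_on, r_off)
```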
Allinger's force field, in its latest MM4 version, calculates heats of formation for hydrocarbons with an RMS error of 0.35 kcal/mol. The main use of molecular mechanics is in the field of molecular dynamics.