Avogadro is a molecule editor and visualizer designed for cross-platform use in computational chemistry, molecular modeling, materials science, and related areas. It is a molecule builder-editor for Windows, Linux, and macOS, and is extensible via a plugin architecture. All source code is licensed under the GNU General Public License version 2. Supported languages include Chinese, French, Italian, Russian, and Polish. Avogadro supports multi-threaded computation and offers a plugin architecture for developers, covering rendering, interactive tools, and Python scripts. It imports files through OpenBabel, generates input for multiple computational chemistry packages, and supports X-ray crystallography and biomolecules.
Open data is the idea that some data should be available to everyone to use and republish as they wish, without restrictions from copyright, patents, or other mechanisms of control. The goals of the open data movement are similar to those of other "open" movements such as open-source software, open content, open education, open educational resources, open government, open knowledge, open access, open science, and the open web. Paradoxically, the growth of the open data movement has been paralleled by a rise in intellectual property rights. The philosophy behind open data has long been established, but the term "open data" itself is recent, gaining popularity with the rise of the Internet and World Wide Web and with the launch of open-data government initiatives such as Data.gov, Data.gov.uk and Data.gov.in. Open data can also be linked data. One of the most important forms of open data is open government data, a form of open data created by governing institutions. Open government data's importance stems from its being part of citizens' everyday lives, down to the most routine and mundane tasks that seem far removed from government.
The concept of open data is not new. One definition is the Open Definition, which can be summarized in the statement that "A piece of data is open if anyone is free to use and redistribute it – subject only, at most, to the requirement to attribute and/or share-alike." Other definitions, such as the Open Data Institute's "Open data is data that anyone can access, use or share", offer an accessible short version but refer back to the formal definition. Open data may include non-textual material such as maps, connectomes, chemical compounds, scientific formulae, medical data and practice, and biodiversity. Problems arise because these materials are commercially valuable or can be aggregated into works of value. Access to, or re-use of, the data is often controlled by organisations, both public and private. Control may be exercised through access restrictions, copyright, and charges for access or re-use. Advocates of open data argue that these restrictions are against the common good and that these data should be made available without restriction or fee.
In addition, it is important that the data be re-usable without requiring further permission, though the types of re-use may be controlled by a license. A typical depiction of the need for open data: "Numerous scientists have pointed out the irony that right at the historical moment when we have the technologies to permit worldwide availability and distributed processing of scientific data, broadening collaboration and accelerating the pace and depth of discovery... we are busy locking up that data and preventing the use of correspondingly advanced technologies on knowledge." Creators of data often do not consider the need to state the conditions of ownership, licensing, and re-use. For example, many scientists do not regard the published data arising from their work to be theirs to control, and consider the act of publication in a journal to be an implicit release of data into the commons. However, the lack of a license makes it difficult to determine the status of a data set and may restrict the use of data offered in an "open" spirit.
Because of this uncertainty, it is possible for public or private organizations to aggregate such data, protect it with copyright, and resell it. The issue of indigenous knowledge (IK) poses a great challenge in terms of capture and distribution: many societies in developing countries lack the technical processes for managing IK. At his presentation at the XML 2005 conference, Connolly displayed two quotations regarding open data: "I want my data back." and "I've long believed that customers of any application own the data they enter into it." Open data can come from any source, and this section lists some of the relevant fields. The concept of open access to scientific data was institutionally established with the formation of the World Data Center system, in preparation for the International Geophysical Year of 1957–1958. The International Council of Scientific Unions oversees several World Data Centres with the mandate to minimize the risk of data loss and to maximize data accessibility. While the open-science-data movement long predates the Internet, the availability of fast, ubiquitous networking has changed the context of open science data, since publishing or obtaining data has become much less expensive and time-consuming.
The Human Genome Project was a major initiative built upon the so-called Bermuda Principles, stipulating that "All human genomic sequence information should be available and in the public domain in order to encourage research and development and to maximise its benefit to society." More recent initiatives, such as the Structural Genomics Consortium, have illustrated that the open data approach can be used productively within the context of industrial R&D. In 2004, the Science Ministers of all nations of the Organisation for Economic Co-operation and Development, which includes most developed countries of the world, signed a declaration stating that all publicly funded archive data should be made publicly available.
Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids. It is necessary because, apart from recent results concerning the hydrogen molecular ion, the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, they can in some cases predict hitherto unobserved chemical phenomena, and they are used in the design of new drugs and materials. Examples of such properties are structure and relative energies, electronic charge density distributions and higher multipole moments, vibrational frequencies, reactivity, other spectroscopic quantities, and cross sections for collision with other particles. The methods used cover both static and dynamic situations. In all cases, the computer time and other resources increase with the size of the system being studied.
That system can be a group of molecules or a solid. Computational chemistry methods range from approximate to accurate. Ab initio methods are based on quantum mechanics and basic physical constants; other methods are called empirical or semi-empirical because they use additional empirical parameters. Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system, to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born–Oppenheimer approximation, which simplifies the underlying Schrödinger equation by assuming that the nuclei remain in place during the calculation. In principle, ab initio methods converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and some residual error inevitably remains.
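The way residual error shrinks as an approximation is systematically refined can be illustrated numerically. The sketch below is a generic finite-difference calculation, not any method named in the text: it solves the one-dimensional harmonic oscillator, whose exact ground-state energy is 0.5 in units where ħ = m = ω = 1, and shows the discretization error falling as the grid is refined.

```python
import numpy as np

def ground_state_energy(n):
    """Ground-state energy of the 1D harmonic oscillator (hbar = m = omega = 1),
    using an n-point central-difference grid on [-8, 8]."""
    x = np.linspace(-8.0, 8.0, n)
    h = x[1] - x[0]
    # Kinetic energy -(1/2) d^2/dx^2 via the three-point stencil
    T = (np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)) / h**2
    V = np.diag(0.5 * x**2)               # harmonic potential
    return np.linalg.eigvalsh(T + V)[0]   # lowest eigenvalue

# Refining the grid (i.e. relaxing the approximation) drives the
# computed energy toward the exact value of 0.5
coarse = ground_state_energy(100)
fine = ground_state_energy(400)
```

Halving the grid spacing cuts the O(h²) discretization error roughly fourfold, mirroring the way systematic refinement of basis sets or integration grids reduces, but never fully eliminates, residual error in real calculations.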
The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable. In some cases, the details of electronic structure are less important than the long-time phase-space behavior of molecules; this is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are used, as they are computationally less intensive than electronic-structure calculations, to enable longer simulations of molecular dynamics. Furthermore, cheminformatics uses still more empirical methods, such as machine learning based on physicochemical properties. One typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target. Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and Coulson's 1952 textbook Valence, each of which served as a primary reference for chemists in the decades to follow.
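The binding-affinity prediction task mentioned above can be caricatured in a few lines. Everything below is synthetic: the three "descriptors" and the affinities are randomly generated stand-ins for real physicochemical data, and an ordinary least-squares fit stands in for the machine-learning model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic descriptor matrix: 50 hypothetical compounds, 3 made-up
# physicochemical descriptors (imagine scaled logP, weight, donor count)
X = rng.uniform(size=(50, 3))
true_w = np.array([1.5, -2.0, 0.7])           # hidden structure-affinity relation
y = X @ true_w + 0.01 * rng.normal(size=50)   # noisy synthetic "affinities"

# Fit a linear model to the training set, then score a new compound
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = np.array([0.2, 0.5, 0.9]) @ w
```

Real cheminformatics pipelines use far richer descriptors and nonlinear models, but the workflow is the same: learn a mapping from computed molecular properties to a measured quantity, then apply it to unseen compounds.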
With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed, and theoretical chemists became extensive users of the early digital computers. One major advance came with Clemens C. J. Roothaan's 1951 paper in Reviews of Modern Physics on the "LCAO MO" approach, for many years the second-most cited paper in that journal. A detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s.
The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s, using Gaussian orbitals, by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer. In 1964, Hückel method calculations of molecules, ranging in complexity from butadiene and benzene to ovalene, were performed on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO. In the early 1970s, efficient ab initio computer programs such as ATMOL and Gaussian began to be used to speed up ab initio calculations.
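The Hückel calculations mentioned above are simple enough to reproduce today in a few lines. The sketch below is a generic modern illustration, not any of the historical programs: it builds the π-system Hamiltonian of 1,3-butadiene in units where α = 0 and β = -1 and diagonalizes it; the textbook result is orbital energies of α ± 1.618β and α ± 0.618β.

```python
import numpy as np

# Pi-system connectivity of 1,3-butadiene: a chain of 4 carbon atoms
n = 4
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0   # beta coupling between bonded neighbors

# Orbital energies in units of |beta|, relative to alpha (sorted ascending)
energies = np.linalg.eigvalsh(H)
# Golden-ratio pattern: -1.618, -0.618, +0.618, +1.618
```

Larger π systems like ovalene differ only in the size and connectivity of H, which is why these calculations were feasible on 1960s hardware.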
Abalone (molecular mechanics)
Abalone is a general-purpose molecular dynamics and molecular graphics program for simulations of biomolecules under periodic boundary conditions in explicit or implicit water models. It is designed to simulate protein folding and DNA-ligand complexes with the AMBER force field. Features include 3D molecular graphics; an automatic force-field generator for the bioelements H, C, N and O; building and editing of chemical structures; a library of building blocks; and the AMBER (Assisted Model Building with Energy Refinement) force fields 94, 96, 99SB and 03.
Henry Stephen Rzepa is a chemist and Emeritus Professor of Computational Chemistry at Imperial College London. Rzepa was born in London in 1950, was educated at Wandsworth Comprehensive School, and entered the chemistry department at Imperial College London, where he graduated in 1971. He stayed to do a Ph.D. on the physical organic chemistry of indoles, supervised by Brian Challis. After spending three years doing postdoctoral research at the University of Texas at Austin with Michael Dewar in the emerging field of computational chemistry, he returned to Imperial College on being appointed a lecturer; he was one of the first to be appointed in the UK in the emerging subject of computational organic chemistry. As of 2017 he is Emeritus Professor of Computational Chemistry, with research interests directed towards combining different types of chemical information tools for solving structural and stereochemical problems in organic and organometallic chemistry and catalysis, using techniques such as semiempirical molecular orbital methods, nuclear magnetic resonance spectroscopy, X-ray crystallography and ab initio quantum theories.
Aware of the complex semantic issues involved in bringing together different areas of chemistry to address modern multidisciplinary problems, he started investigating the use of the Internet as an information and integrating medium around 1987, focusing in 1994 on the World Wide Web as having the most potential. He and Peter Murray-Rust first introduced Chemical Markup Language in 1995 as a rich carrier of semantic chemical information and data. His contributions to chemistry include the exploration of Möbius aromaticity, highlighted by the theoretical discovery of stable forms of cyclic conjugated molecules which exhibit two and higher half-twists in their topology rather than just the single twist associated with Möbius systems. He is responsible for unraveling the mechanistic origins of stereocontrol in a variety of catalytic polymerisation reactions, including that of lactide to polylactide, a new generation of bio-sustainable polymer not dependent on oil. He is also known for integrating chemistry with Internet technologies such as RSS and podcasting, for the introduction of the chemical MIME types in 1994, and for the first electronic-only conferences in organic chemistry, which ran from 1995 to 1998.
Rzepa was awarded the Herman Skolnik Award by the American Chemical Society in 2012.
Christoph Steinbeck is a chemist, born in Neuwied in 1966, who holds a professorship for analytical chemistry and chemometrics at the Friedrich-Schiller-Universität Jena in Thuringia, Germany. Steinbeck received his PhD from the University of Bonn in 1995 for work on LUCY, a software program for structure elucidation from nuclear magnetic resonance correlation experiments, and in 2003 he received his habilitation. Steinbeck's research interests have involved the elucidation of chemical structures of metabolites, and he was one of the first chemists to develop open-source tools for cheminformatics. He initiated JChemPaint, founded the Chemistry Development Kit, and leads the team working on Chemical Entities of Biological Interest. He headed the Cheminformatics and Metabolomics group at the European Molecular Biology Laboratory-European Bioinformatics Institute in Cambridge, United Kingdom from 2008 to 2016, and became a professor for analytical chemistry and chemometrics at the Friedrich-Schiller-Universität Jena in March 2017.
Together with a few other chemists, he was a founding member of the Blue Obelisk movement in 2005. Steinbeck is editor-in-chief of the Journal of Cheminformatics, a director of the Metabolomics Society, past chair of the Computers-Information-Chemistry division of the German Chemical Society, a past trustee of the Chemical Structure Association Trust, and a lifetime member of the World Association of Theoretically Oriented Chemists.
Jean-Claude Bradley was a chemist who promoted open science in chemistry, including at the White House, and was awarded the Blue Obelisk award for this work in 2007. He coined the term "Open Notebook science". He died in May 2014, and a memorial symposium was held on July 14, 2014 at Cambridge University, UK. One outcome of his Open Notebook work is the collection of physicochemical properties of the organic compounds he was studying, all of which he made available as open data under the CCZero license. For example, in 2009 Bradley et al. published their work on making solubility data of organic compounds available as open data. The melting point data set on which he collaborated with Andrew Lang and Antony Williams was published with Figshare, and both data sets were made available as books via the Lulu.com self-publishing platform. He contributed to at least 25 individual blogs. In an interview in 2008 with Bora Zivkovic titled "Doing Science Publicly", he spoke of his work and online presence, and in 2010 he gave an extensive interview with Richard Poynder about the impact of Open Notebook science.