Data management comprises all disciplines related to managing data as a valuable resource. The concept of data management arose in the 1980s as technology moved from sequential processing to random-access storage. Since it was now possible to store a discrete fact and access it using random-access disk technology, those suggesting that data management was more important than business process management used arguments such as "a customer's home address is stored in 75 places in our computer systems." During this period, however, random-access processing was not yet competitive in speed, so those suggesting that "process management" was more important than "data management" used batch processing times as their primary argument. As software applications evolved into real-time, interactive usage, it became obvious that both management disciplines were important: if the data was not well defined, the data would be misused in applications; if the process was not well defined, it was impossible to meet user needs.
In modern management usage, the term data is often replaced by information or knowledge in non-technical contexts. Data management has thus become knowledge management; this trend obscures the raw data processing involved and renders interpretation implicit. The distinction between data and derived value is illustrated by the information ladder. Data has, however, staged a comeback with the popularisation of the term big data, which refers to the collection and analysis of massive data sets. Several organisations have established data management centers for their operations. Integrated data management (IDM) is a tools approach to facilitating data management and improving performance. IDM consists of an integrated, modular environment to manage enterprise application data and to optimize data-driven applications over their lifetime. IDM's purpose is to:
Produce enterprise-ready applications faster
Improve data access and speed iterative testing
Empower collaboration between architects, developers and DBAs
Consistently achieve service level targets
Automate and simplify operations
Provide contextual intelligence across the solution stack
Support business growth
Accommodate new initiatives without expanding infrastructure
Simplify application upgrades and retirement
Facilitate alignment and governance
Define business policies and standards up front
A number of data management frameworks (DMFs) are available. William Richard Evans, of South Africa, has developed three fully integrated data management frameworks. The Data Atom Data Management Framework version 1.0 was developed between 2010 and 2014, and version 2.0 between 2014 and 2017. With the advent of artificial intelligence, the Internet of Things and data lakes, version 2.0 was replaced by the more comprehensive Multi Dimensional Data Management Framework V3.0, which covers seven data environments. On 20 October 2018 he released the Multi Dimensional Data Management Framework V4.0, which places eight data management considerations at its core: four relate to the impact time has on data, and another four provide insight on the current trajectory towards managed technological singularity using artificial intelligence.

The definition provided by DAMA International, the professional organization for the data management profession, is: "Data Management is the development and execution of architectures, policies and procedures that properly manage the full data life-cycle needs of an enterprise."
This broad definition encompasses professions which may not have direct technical contact with lower-level aspects of data management, such as relational database management. Alternatively, the definition in the DAMA International Data Management Body of Knowledge is: "Data management is the development and supervision of plans, policies and practices that control, protect and enhance the value of data and information assets."

Corporate data quality management (CDQM) is, according to the European Foundation for Quality Management and the Competence Center Corporate Data Quality, the whole set of activities intended to improve corporate data quality. The main premise of CDQM is the business relevance of high-quality corporate data. CDQM comprises the following activity areas:
Strategy for Corporate Data Quality: CDQM is affected by various business drivers and requires the involvement of multiple divisions in an organization.
Corporate Data Quality Controlling: Effective CDQM requires compliance with standards and procedures. Compliance is monitored according to defined metrics and performance indicators and reported to stakeholders.
Corporate Data Quality Organization: CDQM requires clear roles and responsibilities for the use of corporate data; the CDQM organization defines tasks and privileges for decision making for CDQM.
Corporate Data Quality Processes and Methods: In order to handle corporate data properly and in a standardized way across the entire organization and to ensure corporate data quality, standard procedures and guidelines must be embedded in the company's daily processes.
Data Architecture for Corporate Data Quality: The data architecture consists of the data object model - which comprises the unambiguous definition and the conceptual model of
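To make the controlling activity concrete, here is a minimal, hypothetical sketch of the kind of data quality indicator a CDQM function might report: completeness of a field and the duplicate rate over a small customer table. The record layout, field names and sample values are illustrative assumptions, not part of any CDQM standard.

```python
# Hypothetical customer records; field names and values are illustrative only.
customers = [
    {"id": 1, "name": "Acme GmbH", "postal_code": "8005"},
    {"id": 2, "name": "Beta AG",   "postal_code": None},
    {"id": 3, "name": "Acme GmbH", "postal_code": "8005"},  # likely duplicate of id 1
]

def completeness(records, field):
    """Share of records in which the given field is populated."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def duplicate_rate(records, key_fields):
    """Share of records whose key fields repeat an earlier record."""
    seen, dupes = set(), 0
    for r in records:
        key = tuple(r.get(f) for f in key_fields)
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes / len(records)

print(f"postal_code completeness: {completeness(customers, 'postal_code'):.0%}")  # 67%
print(f"duplicate rate on name + postal_code: {duplicate_rate(customers, ['name', 'postal_code']):.0%}")  # 33%
```

In practice, such indicators would be computed across the data architecture described below and reported against agreed thresholds to the stakeholders mentioned above.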
Multimedia is content that uses a combination of different content forms such as text, images, animations and interactive content. Multimedia contrasts with media that use only rudimentary computer displays, such as text-only displays, or with traditional forms of printed or hand-produced material. Multimedia can be recorded and played, interacted with or accessed by information content processing devices, such as computerized and electronic devices, but it can also be part of a live performance. Multimedia devices are electronic media devices used to experience multimedia content. Multimedia is distinguished from mixed media in fine art. In the early years of multimedia the term "rich media" was synonymous with interactive multimedia, and "hypermedia" was an application of multimedia. The term multimedia was coined by singer and artist Bob Goldstein to promote the July 1966 opening of his "LightWorks at L'Oursin" show at Southampton, Long Island. Goldstein was aware of an American artist named Dick Higgins, who two years earlier had discussed a new approach to art-making he called "intermedia".
On August 10, 1966, Richard Albarino of Variety borrowed the terminology, reporting: "Brainchild of songscribe-comic Bob Goldstein, the 'Lightworks' is the latest multi-media music-cum-visuals to debut as discothèque fare." Two years later, in 1968, the term "multimedia" was re-appropriated to describe the work of a political consultant, David Sawyer, the husband of Iris Sawyer—one of Goldstein's producers at L'Oursin. In the intervening forty years, the word has taken on different meanings. In the late 1970s, the term referred to presentations consisting of multi-projector slide shows timed to an audio track. By the 1990s, however, 'multimedia' took on its current meaning. In the 1993 first edition of Multimedia: Making It Work, Tay Vaughan declared: "Multimedia is any combination of text, graphic art, sound and video, delivered by computer. When you allow the user – the viewer of the project – to control what and when these elements are delivered, it is interactive multimedia. When you provide a structure of linked elements through which the user can navigate, interactive multimedia becomes hypermedia." The German language society Gesellschaft für deutsche Sprache recognized the word's significance and ubiquitousness in the 1990s by awarding it the title of German 'Word of the Year' in 1995.
The institute summed up its rationale by stating that the word "has become a central word in the wonderful new media world". In common usage, multimedia refers to an electronically delivered combination of media, including video, still images and text, in such a way that it can be accessed interactively. Much of the content on the web today falls within this definition. Some computers marketed in the 1990s were called "multimedia" computers because they incorporated a CD-ROM drive, which allowed for the delivery of several hundred megabytes of video and audio data. That era saw a boost in the production of educational multimedia CD-ROMs. The term "video", if not used to describe motion photography, is ambiguous in multimedia terminology: it is often used to describe the file format, delivery format, or presentation format, rather than "footage", which is used to distinguish motion photography from "animation", i.e. rendered motion imagery. Multiple forms of information content are often not considered modern forms of presentation, such as audio or video.
Conversely, single forms of information content with single methods of information processing are sometimes called multimedia, perhaps to distinguish static media from active media. In the fine arts, for example, Leda Luss Luyken's ModulArt brings two key elements of musical composition and film into the world of painting: variation of a theme and movement of and within a picture, making ModulArt an interactive multimedia form of art. Performing arts may also be considered multimedia, considering that performers and props are multiple forms of both content and media. Multimedia presentations may be viewed by a person on stage, transmitted, or played locally with a media player. A broadcast may be a recorded multimedia presentation. Broadcasts and recordings can use digital electronic media technology. Digital online multimedia may be streamed, and streaming multimedia may be delivered on demand. Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, or simulator.
The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example to make it easier and faster to convey information, or, in entertainment or art, to transcend everyday experience. Enhanced levels of interactivity are made possible by combining multiple forms of media content. Online multimedia is becoming object-oriented and data-driven, enabling applications with collaborative end-user innovation and personalization on multiple forms of content over time. Examples range from multiple forms of content on websites, such as photo galleries whose images and titles are updated by users, to simulations whose coefficients, illustrations, animations or videos are modifiable, allowing the multimedia "experience" to be altered without reprogramming. In addition to seeing and hearing, haptic technology enables virtual objects to be felt. Emerging technology involving illusions of taste and smell may also enhance the multimedia experience. Multimedia may be broadly divided into linear and non-linear categories.
Association for Computing Machinery
The Association for Computing Machinery (ACM) is an international learned society for computing. It was founded in 1947 and is the world's largest scientific and educational computing society. The ACM is a non-profit professional membership group with nearly 100,000 members as of 2019, and its headquarters are in New York City. The ACM is an umbrella organization for scholarly interests in computer science; its motto is "Advancing Computing as a Science & Profession". The ACM was founded in 1947 under the name Eastern Association for Computing Machinery, which was changed the following year to the Association for Computing Machinery. ACM is organized into over 171 local chapters and 37 Special Interest Groups (SIGs), through which it conducts most of its activities. Additionally, there are over 500 university chapters; the first student chapter was founded in 1961 at the University of Louisiana at Lafayette. Many of the SIGs, such as SIGGRAPH, SIGPLAN, SIGCSE and SIGCOMM, sponsor regular conferences, which have become the dominant venues for presenting innovations in their fields.
The groups publish a large number of specialized journals and newsletters. ACM sponsors other computer science related events such as the worldwide ACM International Collegiate Programming Contest, and has sponsored other events such as the chess match between Garry Kasparov and the IBM Deep Blue computer. ACM publishes over 50 journals, including the prestigious Journal of the ACM, and two general magazines for computer professionals, Communications of the ACM and Queue. Other publications of the ACM include:
ACM XRDS, formerly "Crossroads", which was redesigned in 2010 and is the most popular student computing magazine in the US.
ACM Interactions, an interdisciplinary HCI publication focused on the connections between experiences and technology, and the third largest ACM publication.
ACM Computing Surveys
ACM Computers in Entertainment
ACM Special Interest Group: Computers and Society
A number of journals specific to subfields of computer science, titled ACM Transactions. Some of the more notable Transactions include:
ACM Transactions on Computer Systems
IEEE/ACM Transactions on Computational Biology and Bioinformatics
ACM Transactions on Computational Logic
ACM Transactions on Computer-Human Interaction
ACM Transactions on Database Systems
ACM Transactions on Graphics
ACM Transactions on Mathematical Software
ACM Transactions on Multimedia Computing, Communications, and Applications
IEEE/ACM Transactions on Networking
ACM Transactions on Programming Languages and Systems
Although Communications no longer publishes primary research and is not considered a prestigious venue for it, many of the great debates and results in computing history have been published in its pages.
ACM has made all of its publications available to paid subscribers online at its Digital Library, and also has a Guide to Computing Literature. Individual members additionally have access to Safari Books Online and Books24x7. ACM offers insurance, online courses and other services to its members. In 1997, ACM Press published Wizards and Their Wonders: Portraits in Computing, written by Christopher Morgan, with new photographs by Louis Fabian Bachrach; the book is a collection of historic and current portrait photographs of figures from the computer industry. The ACM Portal is an online service of the ACM; its core consists of two main sections: the ACM Digital Library and the ACM Guide to Computing Literature. The ACM Digital Library is the full-text collection of all articles published by the ACM in its journals, magazines and conference proceedings; the Guide is a bibliography in computing with over one million entries. The ACM Digital Library contains a comprehensive archive, starting in the 1950s, of the organization's journals, magazines and conference proceedings.
Online services include a forum called Tech News digest. There is an extensive underlying bibliographic database containing key works of all genres from all major publishers of computing literature; this secondary database is a rich discovery service known as The ACM Guide to Computing Literature. ACM adopted a hybrid open access publishing model in 2013. Authors who do not choose to pay the OA fee must grant ACM publishing rights by either a copyright transfer agreement or a publishing license agreement. ACM has been a "green" publisher: authors may post documents on their own websites and in their institutional repositories with a link back to the ACM Digital Library's permanently maintained Version of Record. All metadata in the Digital Library is open to the world, including abstracts, linked references, citing works and usage statistics, as well as all functionality and services. Other than the free articles, the full texts are accessible by subscription. There is a mounting challenge to the ACM's publication practices coming from the open access movement.
Some authors see a centralized peer-review process as less relevant and publish on their home pages or on unreviewed sites like arXiv. Other organizations have sprung up which do their peer review free and online, such as the Journal of Artificial Intelligence Research, the Journal of Machine Learning Research and the Journal of Research and Practice in Information Technology. In addition to student and regular members, ACM has several advanced membership grades to recognize those with multiple years of membership and "demonstrated performance that sets them apart from their peers". The number of Fellows, Distinguished Members and Senior Members cannot exceed 1%, 10% and 25%, respectively, of the total number of professional members.
Information management concerns a cycle of organizational activity: the acquisition of information from one or more sources, the custodianship and distribution of that information to those who need it, and its ultimate disposition through archiving or deletion. This cycle of organisational involvement with information involves a variety of stakeholders, including those who are responsible for assuring the quality and utility of acquired information. Stakeholders might have rights to originate, distribute or delete information according to organisational information management policies. Information management embraces all the generic concepts of management, including the planning, structuring, controlling and reporting of information activities, all of which are needed in order to meet the needs of those with organisational roles or functions that depend on information. These generic concepts allow the information to be presented to the audience or the correct group of people. After individuals are able to put that information to use, it gains more value.
Information management is related to, and overlaps with, the management of data, technology, processes and – where the availability of information is critical to organisational success – strategy. This broad view of the realm of information management contrasts with the earlier, more traditional view that the life cycle of managing information is an operational matter requiring specific procedures, organisational capabilities and standards that deal with information as a product or a service. In the 1970s, the management of information concerned matters closer to what would now be called data management: punched cards, magnetic tapes and other record-keeping media, involving a life cycle of such formats requiring origination, backup and disposal. At this time the huge potential of information technology began to be recognised: for example, a single chip storing a whole book, or electronic mail moving messages around the world, were remarkable ideas at the time. With the proliferation of information technology and the extending reach of information systems in the 1980s and 1990s, information management took on a new form.
Progressive businesses such as British Petroleum transformed the vocabulary of what had been "IT management": "systems analysts" became "business analysts", "monopoly supply" became a mixture of "insourcing" and "outsourcing", and the large IT function was transformed into "lean teams" that began to allow some agility in the processes that harness information for business benefit. The scope of senior management interest in information at British Petroleum extended from the creation of value through improved business processes, based upon the effective management of information, to the implementation of appropriate information systems that were operated on outsourced IT infrastructure. In this way, information management was no longer a simple job that could be performed by anyone who had nothing else to do; it became strategic and a matter for senior management attention. An understanding of the technologies involved, an ability to manage information systems projects and business change well, and a willingness to align technology and business strategies all became necessary.
In the transitional period leading up to the strategic view of information management, Venkatraman argued that data maintained in IT infrastructure has to be interpreted in order to render information; that the information in our information systems has to be understood in order to emerge as knowledge; that knowledge allows managers to take effective decisions; that effective decisions have to lead to appropriate actions; and that appropriate actions are expected to deliver meaningful results. This is referred to as the DIKAR model: Data, Information, Knowledge, Action and Result. It gives a strong clue as to the layers involved in aligning technology and organisational strategies, and it can be seen as a pivotal moment in changing attitudes to information management. The recognition that information management is an investment that must deliver meaningful results is important to all modern organisations that depend on information and good decision-making for their success. It is believed that good information management is crucial to the smooth working of organisations; although there is no accepted theory of information management per se, behavioural and organisational theories help.
Following the behavioural science theory of management, developed at Carnegie Mellon University and prominently supported by March and Simon, most of what goes on in modern organizations is information handling and decision making. One crucial factor in information handling and decision making is an individual's ability to process information and to make decisions under limitations that might derive from the context: a person's age, the situational complexity, or a lack of requisite quality in the information at hand – all of which is exacerbated by the rapid advance of technology and the new kinds of system it enables, especially as the social web emerges as a phenomenon that business cannot ignore. And yet, well before there was any general recognition of the importance of information management in organisations, March and Simon argued that organizations have to be considered as cooperative systems with a high level of information processing
Text mining, also referred to as text data mining and roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is derived through the devising of patterns and trends by means such as statistical pattern learning. Text mining involves the process of structuring the input text, deriving patterns within the structured data, and evaluating and interpreting the output. 'High quality' in text mining refers to some combination of relevance and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization and entity relation modeling. Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, and data mining techniques including link and association analysis and predictive analytics. The overarching goal is to turn text into data for analysis, via application of natural language processing and analytical methods.
A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted. The term text analytics describes a set of linguistic and machine learning techniques that model and structure the information content of textual sources for business intelligence, exploratory data analysis, research, or investigation. The term is roughly synonymous with text mining; "text analytics" is now used more in business settings, while "text mining" is used in some of the earliest application areas, dating to the 1980s, notably life-sciences research and government intelligence. The term text analytics also describes the application of these techniques to respond to business problems, whether independently or in conjunction with query and analysis of fielded, numerical data. It is a truism that 80 percent of business-relevant information originates in unstructured form, primarily text. These techniques and processes discover and present knowledge – facts, business rules and relationships – that is otherwise locked in textual form, impenetrable to automated processing.
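As a minimal sketch of what "turning text into data" can look like in practice (not a description of any specific product or of the methods named above), the following Python snippet builds simple term-frequency vectors for a few short documents and compares them with cosine similarity. The sample documents and the tokenisation rule are illustrative assumptions.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase the text and keep alphabetic runs: a deliberately simple lexical-analysis step.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors (Counters).
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

docs = [
    "The patient responded well to the new drug.",
    "Clinical trials of the drug showed a strong patient response.",
    "Quarterly revenue grew despite market volatility.",
]
vectors = [Counter(tokenize(d)) for d in docs]

print(cosine(vectors[0], vectors[1]))  # related documents: noticeably higher similarity
print(cosine(vectors[0], vectors[2]))  # unrelated documents: similarity at or near zero
```

A fuller pipeline would add term weighting such as TF-IDF and the linguistic subtasks described below.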
Subtasks—components of a larger text-analytics effort—typically include the following. Information retrieval or identification of a corpus is a preparatory step: collecting or identifying a set of textual materials, on the Web or held in a file system, database, or content corpus manager, for analysis. Although some text analytics systems apply advanced statistical methods, many others apply more extensive natural language processing, such as part-of-speech tagging, syntactic parsing and other types of linguistic analysis. Named entity recognition is the use of gazetteers or statistical techniques to identify named text features: people, place names, stock ticker symbols, certain abbreviations, and so on. Disambiguation—the use of contextual clues—may be required to decide whether, for instance, "Ford" refers to a former U.S. president, a vehicle manufacturer, a movie star, a river crossing, or some other entity. Recognition of pattern-identified entities: features such as telephone numbers, e-mail addresses and quantities can be discerned via regular expressions or other pattern matches (a brief regular-expression sketch appears after this list of subtasks).
Document clustering: identification of sets of similar text documents. Coreference: identification of noun phrases and other terms that refer to the same object. Relationship and event extraction: identification of associations among entities and other information in text. Sentiment analysis involves discerning subjective material and extracting various forms of attitudinal information: sentiment, opinion and emotion. Text analytics techniques are helpful in analyzing sentiment at the entity, concept, or topic level and in distinguishing opinion holder and opinion object. Quantitative text analysis is a set of techniques stemming from the social sciences in which either a human judge or a computer extracts semantic or grammatical relationships between words in order to find out the meaning or stylistic patterns of a casual personal text, for purposes such as psychological profiling. Text mining technology is now broadly applied to a wide variety of government, research and business needs. All three groups may use text mining for records management and for searching documents relevant to their daily activities.
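The regular-expression sketch referenced above: a minimal, hypothetical illustration of recognising pattern-identified entities such as e-mail addresses and telephone numbers. The patterns and the sample sentence are assumptions for illustration, not a production-grade extractor.

```python
import re

# Deliberately simple patterns; real extractors handle many more formats and locales.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

text = "Contact Jane Doe at jane.doe@example.com or +1 (555) 010-2345 for the Q3 report."

print(EMAIL.findall(text))  # ['jane.doe@example.com']
print(PHONE.findall(text))  # ['+1 (555) 010-2345']
```

In practice, such pattern-based recognition is usually combined with the statistical and linguistic techniques listed above rather than used on its own.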
Legal professionals may use text mining for e-discovery. Governments and military groups use text mining for national intelligence purposes. Scientific researchers incorporate text mining approaches into efforts to organize large sets of text data, to determine ideas communicated through text, and to support scientific discovery in fields such as the life sciences and bioinformatics. In business, applications are used to support competitive intelligence and automated ad placement, among numerous other activities. Many text mining software packages are marketed for security applications, especially the monitoring and analysis of online plain-text sources such as Internet news, for national security purposes. Text mining is also involved in the study of text encryption/decryption. A range of text mining applications in the biomedical literature has been described.
Usability is the ease of use and learnability of a human-made object such as a tool or device. In software engineering, usability is the degree to which software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency and satisfaction in a quantified context of use. The object of use can be a software application, book, machine, vehicle, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel and others. Usability is relevant to consumer electronics, to knowledge transfer objects, and to mechanical objects such as a door handle or a hammer. Usability includes methods of measuring usability, such as needs analysis, and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site is designed.
Usability considers user satisfaction and utility as quality components, and aims to improve user experience through iterative design. The primary notion of usability is that an object designed with a generalized user's psychology and physiology in mind is, for example:
More efficient to use—takes less time to accomplish a particular task
Easier to learn—operation can be learned by observing the object
More satisfying to use
Complex computer systems are finding their way into everyday life, and at the same time the market is saturated with competing brands. This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented methods instead of technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated.
A method called contextual inquiry does this in the naturally occurring context of the users' own environment. In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team. The term user friendly is often used as a synonym for usable, though it may also refer to accessibility. Usability describes the quality of user experience across websites, software and environments. There is no consensus about the relation of the terms ergonomics and usability; some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters and usability focusing on psychological matters. Usability is also important in website development. According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait.
And they don't want to learn. There is no manual for a Web site. People have to be able to grasp the functioning of the site after scanning the home page—for a few seconds at most." Otherwise, most casual users leave the site and browse or shop elsewhere. ISO defines usability as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." The word "usability" also refers to methods for improving ease of use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written about a framework of system acceptability, where usability is a part of "usefulness" and is composed of:
Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency: Once users have learned the design, how quickly can they perform tasks?
Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency?
Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
Satisfaction: How pleasant is it to use the design?
Usability is associated with the functionalities of the product, in addition to being a characteristic of the user interface. For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view. When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness and efficiency of the interface". Each component may be measured subjectively against criteria, e.g. the Principles of User Interface Design, to provide a metric, often expressed as a percentage. It is important to distinguish between usability testing and usability engineering. Usability testing is the measurement of ease of use of a piece of software. In contrast, usability engineering is the research and design process that ensures a product with good usability.
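As a rough illustration of what "measurement of ease of use" can mean in a usability test, the following sketch computes three of the quality components above (completion rate for effectiveness, mean time on task for efficiency, and mean error count) from a handful of hypothetical test sessions. The session data and the chosen metrics are assumptions, not a standard instrument.

```python
from statistics import mean

# Hypothetical usability-test sessions: (task completed?, seconds on task, error count).
sessions = [
    (True, 42.0, 0),
    (True, 55.5, 1),
    (False, 90.0, 3),
    (True, 38.2, 0),
]

completion_rate = sum(1 for done, _, _ in sessions if done) / len(sessions)
avg_time = mean(time for _, time, _ in sessions)
avg_errors = mean(errors for _, _, errors in sessions)

print(f"Completion rate: {completion_rate:.0%}")     # effectiveness / learnability
print(f"Mean time on task: {avg_time:.1f} s")        # efficiency
print(f"Mean errors per session: {avg_errors:.1f}")  # error component
```

Satisfaction, by contrast, is usually captured with questionnaires rather than computed from interaction logs.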
Usability is a non-functional requirement. As
Information visualization or information visualisation is the study of visual representations of abstract data to reinforce human cognition. The abstract data include both numerical and non-numerical data, such as text and geographic information. However, information visualization differs from scientific visualization: "it's infovis when the spatial representation is chosen, it's scivis when the spatial representation is given". The field of information visualization has emerged "from research in human-computer interaction, computer science, visual design and business methods. It is applied as a critical component in scientific research, digital libraries, data mining, financial data analysis, market studies, manufacturing production control, drug discovery". Information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see and understand large amounts of information at once.
Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways." Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization, data mining and machine learning methods. Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and it allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and often is, followed by more analytical or formal analysis, such as statistical hypothesis testing. The modern study of visualization started with computer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power limited its usefulness."
The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization in Scientific Computing. Since then there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topics of data visualisation, information visualization and scientific visualisation, as well as more specific areas such as volume visualization. In 1786, William Playfair published the first presentation graphics.

Related techniques and concepts include (a minimal heatmap sketch appears at the end of this section):
Cartogram
Cladogram
Concept mapping
Dendrogram
Information visualization reference model
Graph drawing
Heatmap
HyperbolicTree
Multidimensional scaling
Parallel coordinates
Problem solving environment
Treemapping

Information visualization insights are being applied in areas such as scientific research, digital libraries, data mining, information graphics, financial data analysis, health care, market studies, manufacturing production control, crime mapping, and eGovernance and policy modeling.

Notable academic and industry laboratories in the field are:
Adobe Research
IBM Research
Google Research
Microsoft Research
Panopticon Software
Scientific Computing and Imaging Institute
Tableau Software
University of Maryland Human-Computer Interaction Lab
Vvi

Conferences in this field, ranked by significance in data visualization research, are:
IEEE Visualization: An annual international conference on scientific visualization, information visualization and visual analytics. Conference is held in October.
ACM SIGGRAPH: An annual international conference on computer graphics, convened by the ACM SIGGRAPH organization. Conference dates vary.
EuroVis: An annual Europe-wide conference on data visualization, organized by the Eurographics Working Group on Data Visualization and supported by the IEEE Visualization and Graphics Technical Committee. Conference is held in June.
Conference on Human Factors in Computing Systems: An annual international conference on human-computer interaction, hosted by ACM SIGCHI. Conference is held in April or May.
Eurographics: An annual Europe-wide computer graphics conference, held by the European Association for Computer Graphics. Conference is held in April or May.
PacificVis: An annual visualization symposium held in the Asia-Pacific region, sponsored by the IEEE Visualization and Graphics Technical Committee. Conference is held in March or April.

For further examples, see: Category:Computer graphics organizations, Computational visualistics, Data art, Data Presentation Architecture, Data visualization, Geovisualization, Infographics, Patent visualisation, Software visualization, Visual analytics, List of information graphics software, and List of countries by economic complexity (an example of Treemapping).
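The heatmap sketch referenced in the techniques list above: a minimal, hypothetical example that renders a small matrix of synthetic values as a heatmap with matplotlib. The data, labels and colormap are illustrative assumptions; the point is only how little code a basic information visualization can require.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data: e.g. twelve weekly measurements for five categories.
rng = np.random.default_rng(seed=0)
values = rng.random((5, 12))

fig, ax = plt.subplots(figsize=(6, 3))
image = ax.imshow(values, cmap="viridis", aspect="auto")

ax.set_xlabel("Week")
ax.set_ylabel("Category")
ax.set_yticks(range(5))
ax.set_yticklabels([f"C{i}" for i in range(5)])
fig.colorbar(image, ax=ax, label="Value")

plt.tight_layout()
plt.show()  # or fig.savefig("heatmap.png") to write a file instead
```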