
Expanding Memomics – Extracting the Data Gems from a Babylon of Bejeweled Information

Memomics, understood as the study of the Meme decoded through ontological mapping, is a valuable tool for improving semantic webs and search engines. Commercial and advertising applications driven by artificial intelligence agents can benefit from the correlations it uncovers, as explained below.

According to Wikipedia, a Meme is an idea or belief that is transmitted from one person or group of people to another. The name comes from an analogy: as genes transmit biological information, memes transmit information about ideas and beliefs. The Memome can be seen as the complete collection of all Memes. Diving a little deeper into this concept, it can also be said to encompass all human knowledge.

Genomics and proteomics are the study of the genome, the totality of the hereditary information of organisms, and of their complete complement of proteins, respectively. Likewise, Memomics can be considered the study of the Memome, the complete collection of all Memes.

In genomics and proteomics the study involves different types of “mapping” of the functions and structures of genes and proteins. The mapping can, for example, be pathological, that is, the correlation between the expression profiles of certain genes and proteins and diseases; or it can be topological: expression with respect to a certain tissue type, cell type or organ.

Likewise, Memomics studies the ontological mapping of ideas and terms. One company, Alitora Systems, has taken the first steps in the field of Memomics, and guess where they started: with life-sciences data. They have developed convenient text and data mining tools that can speed up meaningful search and provide links to the most ontologically correlated concepts.

A more ambitious project would be a complete ontological mapping of all human knowledge: for each existing term or concept, the concepts to which it is naturally linked. By this I mean more than just a semantic mapping, which gives the meaning of a term in features and other terms. I’d like to expand the mapping as I suggested in my previous post, “Minerva’s Owls Only Fly at Dusk – Patently Intelligent Ontologies”: map the proximity relationship of each term defined in a semantic web to every other defined term, recording the average distance between those terms across all documents of the entire World Wide Web, weighted by the frequency of such occurrences. Such an ontology map could extract terms whose occurrence correlation lies well above the “noise”. Many trivial terms occur with a high frequency of proximity to virtually any term; they form a noise level, a threshold that significant term correlations must exceed. These trivial terms include all kinds of syntactic words: conjunctions, adverbs, adjectives, modal verbs, etc.
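As a rough illustration of the proximity mapping just described, the sketch below weights each co-occurring term pair by inverse distance within a small window and keeps only pairs whose accumulated weight rises above a noise threshold. All names, the tiny stopword list and the threshold value are my own illustrative assumptions, not part of any existing system:

```python
from collections import Counter

# Trivial terms that would otherwise dominate the "noise floor".
STOPWORDS = {"the", "a", "of", "and", "or", "to", "in", "is"}

def proximity_weights(docs, window=5):
    """Sum inverse-distance weights for every term pair co-occurring
    within `window` tokens of each other."""
    weights = Counter()
    for doc in docs:
        tokens = [t for t in doc.lower().split() if t not in STOPWORDS]
        for i, t1 in enumerate(tokens):
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                t2 = tokens[j]
                if t1 != t2:
                    pair = tuple(sorted((t1, t2)))
                    weights[pair] += 1.0 / (j - i)  # closer terms weigh more

    return weights

def significant_pairs(weights, threshold):
    """Keep only correlations whose weight exceeds the noise threshold."""
    return {pair: w for pair, w in weights.items() if w > threshold}

docs = [
    "the gene encodes a protein linked to the disease",
    "mutations in the gene cause the disease",
    "the protein pathway is a drug target",
]
w = proximity_weights(docs)
strong = significant_pairs(w, threshold=0.6)
```

In a real system the threshold would be estimated from the corpus-wide noise floor rather than set by hand.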

A disadvantage of setting the threshold too high is that normally trivial terms can, in combination with another term, take on a very specific meaning.

When this ontological mapping is carried out only within specific, segmented meaning classes or fields, important correlations can suddenly emerge that were not visible across most classes and fields.

Therefore, this occurrence-weighted ontological proximity mapping could be carried out in combination with a “website classification” (i-taxonomy).

Vice versa, the occurrence-frequency-weighted ontological proximity mapping exercise could itself yield classes and subclasses. The process can therefore be implemented iteratively: meaningful correlations can create classes, which in turn can be mined to find new assignments and suggest new subclasses.
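This class-wise emergence can be sketched as follows: once documents are segmented into classes, a term pair counts as “emergent” when its within-class rate exceeds its global rate by some lift factor. The class labels, the toy documents and the lift value are all illustrative assumptions:

```python
from collections import Counter

def pair_counts(docs):
    """Count, per unordered term pair, the number of documents containing both."""
    counts = Counter()
    for doc in docs:
        tokens = set(doc.lower().split())
        for t1 in tokens:
            for t2 in tokens:
                if t1 < t2:
                    counts[(t1, t2)] += 1
    return counts

def class_specific_pairs(docs_by_class, min_lift=2.0):
    """Pairs whose within-class rate exceeds their global rate by `min_lift`."""
    all_docs = [d for docs in docs_by_class.values() for d in docs]
    global_counts = pair_counts(all_docs)
    n_global = len(all_docs)
    emergent = {}
    for label, docs in docs_by_class.items():
        local = pair_counts(docs)
        for pair, c in local.items():
            local_rate = c / len(docs)
            global_rate = global_counts[pair] / n_global
            if local_rate / global_rate >= min_lift:
                emergent.setdefault(label, []).append(pair)
    return emergent

docs_by_class = {
    "biology": ["gene protein disease", "gene pathway disease"],
    "finance": ["market risk hedge", "market price risk"],
}
emergent = class_specific_pairs(docs_by_class)
```

Each emergent pair could then seed a subclass, closing the iterative loop the text describes.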

Another ontological mapping is to determine if certain links on the web have a correlation with certain terms.

The implementation must start with all the information present on the web on a fixed date. This snapshot must be stored, frozen, so that the extensive data mining exercise of proximity mapping can be carried out. Once the given Memome is fully decoded, the process can be repeated iteratively with fresh snapshots and will eventually catch up with the “present” at that point.

Artificial intelligence agents will carry out the ontological mapping process and learn from the patterns they recognize, making it easier to map future events and create more classes. Furthermore, the links thus detected and/or generated that are used most frequently can be added to the appropriate hubs in the “Hubbit” system, which I discussed in my previous article: “From search engines to hub generators and centralized personal multipurpose Internet interfaces”. Well-frequented links will be favored and insignificant links will not reach a permanent stage, according to the evangelical adage: “To the one who has, it will be given; from the one who does not have, it will be taken away”, which is also a good metaphor for the way neural links are established in our brain.
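A minimal sketch of that reinforcement dynamic, assuming a hypothetical hub that stores link weights: clicked links gain weight, all weights decay each round, and links that fall below a floor are pruned, so well-frequented links persist while insignificant ones fade. The function name and all constants are illustrative:

```python
def update_hub(link_weights, clicks, reinforce=1.0, decay=0.8, prune_below=0.1):
    """Reinforce clicked links, decay all weights, prune insignificant links."""
    for link in clicks:
        link_weights[link] = link_weights.get(link, 0.0) + reinforce
    for link in list(link_weights):
        link_weights[link] *= decay
        if link_weights[link] < prune_below:
            del link_weights[link]  # "taken away" from links that have little
    return link_weights

hub = {}
for _ in range(5):
    update_hub(hub, clicks=["popular-link"])   # steadily reinforced
update_hub(hub, clicks=["one-off-link"])       # clicked once, then neglected
```

Under these assumptions the one-off link decays below the pruning floor after a few further rounds, while the popular link's weight settles at a stable level.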

Undertaking such a large project would require enormous amounts of computing power and memory and may, as yet, be beyond what is technically possible. This is the downside. But the computational power and memory of computers have increased exponentially over many decades, and there is no reason to believe that the required technology is not at hand.

The applications and business benefits are numerous.

Chatbots and other language systems can be improved by learning from these correlation maps. Search engines can be improved by ranking results according to the frequency-weighted proximity mapping. At the bottom of a search results page, you might see suggestions of the form “people who searched for these terms also searched …”.
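One way such suggestions could be derived is from session-level co-occurrence of queries: recommend the queries most often issued alongside a given one. The session data and names below are invented for illustration:

```python
from collections import Counter

def build_cooccurrence(sessions):
    """Map each query to a Counter of queries issued in the same session."""
    co = {}
    for session in sessions:
        for q in set(session):
            others = Counter(x for x in session if x != q)
            co.setdefault(q, Counter()).update(others)
    return co

def also_searched(co, query, top_n=3):
    """Return the queries most frequently co-occurring with `query`."""
    return [q for q, _ in co.get(query, Counter()).most_common(top_n)]

sessions = [
    ["memomics", "semantic web"],
    ["memomics", "ontology"],
    ["memomics", "ontology", "data mining"],
]
co = build_cooccurrence(sessions)
suggestions = also_searched(co, "memomics")
```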

Business ontological mappings can be created in which terms are linked to all companies involved in trading products related to the term. Alitora Systems, for example, has mapped how certain disease-related genes are connected to companies that develop drugs against these diseases through a mechanism involving the associated gene, protein or metabolic pathway.

Therefore, the Commerce Memome (Commercome) could also be created as a searchable database: the complete set of all business relationships, that is, products linked to sellers, buyers, manufacturers, etc. Commercomics would map these relationships ontologically. Once such an information network has been created, it will be a very useful and easy way to identify your competitors and newcomers in the field (provided the system is kept up to date).

Advertising could benefit greatly from these correlation maps. By analogy with suggestions of the form “people who searched for these terms also searched …”, ontology-mapping technology could be used in advertising, based on the same principle familiar from commercial sites such as Amazon.com (“people who bought A also bought B”), but going a little beyond it with an evolutionary, learning algorithm. For example, advertising costs could be linked to the frequency with which an advertisement is clicked (PPC advertising), while the frequency with which it is displayed is simultaneously linked to the same signal, again obeying the principle of “to those who have, more will be given; from those who do not have, it will be taken away.” Other business data and text mining could involve mapping the frequency of ad clicks against certain search terms. The artificial intelligence robot providing these functions would learn from context, adapt the information display accordingly, generate classes, and extract more specific correlations from the generated subclasses.
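The click-frequency feedback loop just described could be sketched as follows: each ad's share of future impressions is made proportional to its observed click-through rate, so ads that attract clicks are shown (and billed) more. All names and numbers are illustrative assumptions:

```python
def impression_shares(ads):
    """ads maps ad id -> (clicks, impressions); returns CTR-proportional
    shares of future impressions."""
    ctr = {ad: (c / i if i else 0.0) for ad, (c, i) in ads.items()}
    total = sum(ctr.values())
    if total == 0:
        return {ad: 1.0 / len(ads) for ad in ads}  # cold start: uniform split
    return {ad: r / total for ad, r in ctr.items()}

ads = {"ad-a": (30, 100), "ad-b": (10, 100), "ad-c": (0, 100)}
shares = impression_shares(ads)
```

A production system would need some exploration (occasionally showing low-CTR ads) so that a newcomer is not starved of impressions before it can prove itself.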

FAQ queries could be assisted by such AI bots, preferably ones able to converse in natural language like a chatbot. From questions, answers and user satisfaction results, such bots could be programmed to learn and evolve into more efficient information providers.

Thus, Memomics can be expanded to become a valuable engine for extracting the data gems from a Babylon of bejeweled information.