Pat Inc. is the world leader in understanding meaning in language. If machines can understand us, communicating with them becomes far more effective.


Humanize conversation with machines.

How does it work?

We parse to meaning. Pat breaks language down by the only commonality between languages: meaning. Our system has three parts:

  • Language layer
  • Meaning layer
  • Context layer

By focusing on meaning, we have a relatively small number of ‘links’ to build in our meaning layer, or neural network. This ensures the system scales and completely avoids the ‘combinatorial explosion’ that currently challenges computers trying to manipulate the infinite permutations of sentence structures across multiple languages. Speech recognition is commoditized, so our customers can use their own speech recognition: Pat receives text as input and adds meaning as output.
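To make the three-layer idea concrete, here is a minimal sketch of a text-in, meaning-out pipeline. Everything in it is an illustrative assumption: the function names (`language_layer`, `meaning_layer`, `context_layer`), the toy lexical patterns, and the frame format are invented for this example and are not Pat's actual API or data.

```python
def language_layer(text: str) -> list[str]:
    """Surface analysis: normalize the input text into word tokens."""
    return text.lower().rstrip(".?!").split()

def meaning_layer(tokens: list[str]) -> dict:
    """Match tokens against a small set of meaning patterns.
    Matching at the level of meaning needs only a handful of
    patterns, not one per surface word order."""
    patterns = {
        ("turn", "on"): {"intent": "activate"},
        ("switch", "on"): {"intent": "activate"},
        ("turn", "off"): {"intent": "deactivate"},
    }
    for trigger, frame in patterns.items():
        if all(word in tokens for word in trigger):
            theme = [w for w in tokens if w not in trigger + ("the", "please")]
            return {**frame, "theme": " ".join(theme)}
    return {"intent": "unknown"}

def context_layer(frame: dict, context: dict) -> dict:
    """Resolve frame slots against conversational context (e.g. pronouns)."""
    if frame.get("theme") in ("it", ""):
        frame["theme"] = context.get("last_theme", frame["theme"])
    return frame

def parse_to_meaning(text: str, context: dict) -> dict:
    """Chain the three layers: text in, meaning frame out."""
    return context_layer(meaning_layer(language_layer(text)), context)
```

For example, `parse_to_meaning("Please turn the lamp on.", {})` yields an `activate` frame with `lamp` as the theme, and a follow-up like "Switch it on." resolves "it" via the context layer.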

How does Pat scale the Meaning Layer?

We build the neural network, or meaning layer, once, raising Pat's conversational ability from that of a 1-year-old to that of a 10-year-old. Matching against the meaning layer reduces potentially 10 to the power of 30 permutations of a sentence to fewer than 10 patterns. That is why Pat is scalable where other attempts at NLU get stuck.
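The permutations-to-patterns claim can be illustrated with a toy example: many surface forms of the same sentence collapse onto a single meaning frame. This is not Pat's implementation; the role assignments are hard-coded for brevity, purely to show the collapse from many surface permutations to one meaning.

```python
# Four surface permutations of one underlying event.
SURFACE_FORMS = [
    "John gave Mary the book",
    "John gave the book to Mary",
    "The book was given to Mary by John",
    "Mary was given the book by John",
]

def to_meaning(sentence: str) -> dict:
    """Map any permutation to the same meaning frame (toy heuristic:
    normalize the passive, then emit a fixed transfer frame)."""
    words = sentence.lower().replace("was given", "gave").split()
    if "gave" not in words:
        return {}
    return {"event": "transfer", "agent": "john",
            "recipient": "mary", "theme": "book"}

# All four surface forms reduce to a single meaning pattern.
meanings = {tuple(sorted(to_meaning(s).items())) for s in SURFACE_FORMS}
```

Here `len(meanings)` is 1: a system that stores one meaning pattern covers every word-order permutation, whereas a system that stores surface forms must enumerate them all.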

Brief history

Founder and CTO John Ball worked on neural networks alongside luminaries like Marvin Minsky, the father of A.I. at MIT. Browsing in a bookstore at Princeton, John stumbled upon a book that showed how to match text straight to meaning. Professor Van Valin, the author of that book on Role and Reference Grammar (RRG), is now our CSO. Still, no one thought matching meaning was scalable on machines. Today, Pat's team is focused on meaning, an area overlooked by today's A.I. companies because of its complexity. A few patents later, Pat is opening a multi-billion-dollar blue ocean that complements current A.I. solutions, which will never beat a 3-year-old in NLU. For example, Pat could enable A.I. platforms like Siri and Watson to hold a meaningful conversation.

About Pat Inc.

Founded in 2015, Pat Inc. is building the leading ‘meaning as a service’ provider for all A.I.-based applications in the market. Pursuing its bold vision to humanize conversation with machines, Pat has made the biggest breakthrough in Natural Language Understanding (NLU) by integrating RRG with a patented neural network. Parsing straight to meaning yields the accurate intent of human language. In 2016, the company launched an API in private beta that adds meaning to A.I. products in multiple languages. Headquartered in Palo Alto, California, with R&D facilities in Sydney, Australia, Pat is privately owned. For more information about Pat, the meaning matcher, visit www.pat.ai.

What does Pat do?

Pat humanizes conversation with machines because Pat understands the meaning of text.


Currently, computers do not understand the meaning and intent of human language.

Human conversation ≠ big data problem ≠ probabilistic. We humanize conversation with machines by parsing text straight to meaning.

Who would be the main customers?

Companies and developers that want to create interactive text or voice experiences between their customers and a service can now do so with accurate language understanding. Digital assistant technology will be enhanced because now it can understand meaning. For example, if a company wants to deliver a customer-service chat bot or interactive support, it will be able to do so with human-like understanding.

Why is this launch significant for the market?

It is the first time a company has made a breakthrough in NLU, a paradigm shift for the A.I. industry.

When was the company founded?

Founded January 1, 2015, in Palo Alto.

Where are you based?

HQ in Palo Alto, California; R&D in Sydney, Australia.

Who are your founders/executives?

  • Wibe Wagemans, CEO
  • John Ball, CTO & Founder
  • Beth Carey, COO & Co-Founder
  • Professor Robert Van Valin, Jr., CSO

Scientific Advisory Board

  • Professor Robert Van Valin, Jr., Department of Linguistics, the University of Düsseldorf and University at Buffalo, The State University of New York, PhD UC Berkeley.
  • Professor William Foley, Department of Linguistics, the University of Sydney, PhD UC Berkeley.
  • Professor Daniel Everett, Dean of Arts and Sciences, Bentley University in Waltham, MA, PhD University of Campinas.
  • Professor Avideh Zakhor, Department of Electrical Engineering and Computer Sciences, UC Berkeley, PhD MIT.
  • Dr. Hossein Eslambolchi, former CTO of AT&T; 1,000 patents.
  • Professor James Pustejovsky, Department of Computer Science at Brandeis University, PhD MIT.

Why can you do this and others cannot?

There are open scientific problems in natural language understanding, as recently identified by Google, Facebook, and others, including ambiguity and the acquisition of real-world knowledge. The industry has not been able to solve them because it is focused on statistics and brute-force computing power. Recently, machine learning and deep neural networks have made progress in vision recognition and in learning patterns of word collocations across languages, but they cannot represent meaning. By their own account, NLU is still fundamentally unsolved.

How does RRG compare with other linguistic models?

RRG parses straight to meaning, unlike LFG (Lexical-Functional Grammar), CCG (Combinatory Categorial Grammar), and Chomskyan linguistic models.

Why did A.I. turn to statistics instead of linguistics?

In the 1960s and 1970s, NLU stopped advancing because none of Chomsky's rules proved computationally implementable; this is why the industry turned to statistics. Frederick Jelinek at IBM famously said: "Every time I fire a linguist, the performance of the [A.I.] goes up." ‘Good enough’ applications have been built, such as Google Translate and digital assistants like Siri, Cortana, and Alexa, but they are limited in the accuracy they can achieve without NLU.

What is NLU?

Machine reading comprehension.

Can your solution support several languages? Can we intermix several languages in a same sentence?

Yes. At its core, our system is language independent, being based on meaning (the RRG linking algorithm converts language to meaning). In written form, it currently handles multiple languages within a sentence. For spoken input, we currently rely on commercial speech recognition, which by design is hard-coded to a specific language.
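A minimal sketch of what language independence at the meaning level can look like: the meaning layer is shared, and only the language layer's lexicon differs, so English, Spanish, or a mixed-language sentence map to the same concepts. The lexicon and concept names here are illustrative assumptions, not Pat's data or algorithm.

```python
# Toy bilingual lexicon: surface word -> language-independent concept.
# Function words map to None and are dropped.
LEXICON = {
    "dog": "DOG", "perro": "DOG",
    "sleeps": "SLEEP", "duerme": "SLEEP",
    "the": None, "el": None,
}

def to_concepts(sentence: str) -> list[str]:
    """Replace each word with its concept and drop function words."""
    concepts = [LEXICON.get(w) for w in sentence.lower().rstrip(".").split()]
    return [c for c in concepts if c]
```

With this sketch, `to_concepts("The dog sleeps.")` and `to_concepts("El perro duerme.")` produce the same concept sequence, and so does a mixed sentence like "El dog duerme.", since each word resolves to a concept independently of the language it came from.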