We’ve been watching the development of modern Large Language Models (LLMs) like GPT with keen interest and some awe. Like much of the tech world we’ve been fascinated by ChatGPT and the mix of impressive results, occasional errors and outright (but convincing) fabrication that it delivers.
As others have noticed, ChatGPT has a surprisingly good knowledge of linked data and RDF and is able to synthesise examples of RDF data and SPARQL queries. Inspired by this we’ve been exploring the use of ChatGPT in modelling and ontology design. This #TechTalk post explores the question: How close is it to being a useful assistant?
First stab at an ontology
To try this out I picked the domain of representing organisational structure, something we’re pretty familiar with thanks to our work on developing the org ontology.
So let’s get started:
Generate an ontology to describe organisations. It should be able to represent an organisation, the locations at which it was based, the employees in the organisation and the roles that they play. Please express the ontology in Turtle syntax.
Dave
Here’s an example of an ontology in Turtle syntax to describe organizations:
```turtle
@prefix org: <http://www.example.org/ontology/organization#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

org:Organization a rdfs:Class .

org:hasLocation a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range org:Location .

org:hasEmployee a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range org:Employee .

org:Location a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:Location ;
    rdfs:range xsd:string .

org:Employee a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:Employee ;
    rdfs:range xsd:string .

org:hasRole a rdf:Property ;
    rdfs:domain org:Employee ;
    rdfs:range org:Role .

org:Role a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:Role ;
    rdfs:range xsd:string .
```
This ontology defines several classes (Organization, Location, Employee, Role) and properties (hasLocation, hasEmployee, hasName, hasRole) to describe organizations and their components. The Organization class is related to the Location class through the hasLocation property, and similarly, the Organization class is related to the Employee class through the hasEmployee property. The Employee class is related to the Role class through the hasRole property. All classes have a hasName property to represent the name of the organization, location, employee, or role.
ChatGPT
So that’s a pretty interesting start!
It’s syntactically legal Turtle and is a plausible first stab, if a little simple. It includes syntactic niceties like picking a sensible prefix org: (maybe because there’ll be examples of the use of the W3C org ontology, and the specs themselves, in its corpus). As a way to avoid having to look up all the annoying URI prefixes and create some starting Turtle this is actually useful. The results certainly appeared faster than I could have typed them myself.
It’s also interesting that rather than finding and reusing ontologies or relevant existing vocabulary elements that are in its corpus (like schema.org) it has “chosen” to write an ontology from scratch. I did ask it to “generate” one but often what ChatGPT “generates” seems to be a recognisable mutation from material that’s likely in its corpus.
For naming the classes and properties it has chosen to use the terms in the prompt, which is reasonable. Though it has chosen the American/Oxford-English spelling, Organization, which is no doubt much more common in its corpus!
It hasn’t provided any properties or relationships beyond the minimal ones mentioned in the prompt, though it has provided for naming everything through hasName. There’s also a rather deeper issue with the choice of classes vs. properties that we’ll come back to. However, first let’s look at a basic problem with this design – that hasName property.
To someone not familiar with RDFS and OWL (e.g. someone who might try a task like this for real with ChatGPT) those definitions for hasName might look reasonable. However, using rdfs:domain, rather than OWL restrictions, has unintended consequences. With those declarations, anything with a name can be deduced to be a Role and an Employee and a Location (it sits in the intersection of all three classes). More than likely we would regard those classes as disjoint (with an empty intersection), and we’d certainly want the option for things to be one or the other, but not all of them, yet still give them names.
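To make the consequence concrete, here is a minimal sketch (our own, using a made-up resource) of what an RDFS reasoner is entitled to conclude from those domain declarations:

```turtle
@prefix ex:  <http://example.com/> .
@prefix org: <http://www.example.org/ontology/organization#> .

# Asserted fact about some arbitrary named thing:
ex:widget org:hasName "Widget" .

# Entailed, because org:hasName carries rdfs:domain declarations for all three classes:
ex:widget a org:Location .
ex:widget a org:Employee .
ex:widget a org:Role .
```

If we then added disjointness axioms such as `org:Employee owl:disjointWith org:Location`, data like this would become inconsistent rather than merely surprising.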
Fixing the domain declarations
That definition of `hasName` doesn’t seem quite right. Has name has been given the domain of `Organization` and the domain of `Employee` and the domain of `Location`. Doesn’t that mean that anything with a `hasName` property will be treated as both an `Organization` and an `Employee` and a `Location`?
Dave
You are correct. That definition of hasName is not appropriate, as it is giving multiple domains to the same property, which would lead to ambiguity in the ontology. To correct this, we can either give hasName separate properties for each class or define a common class with the hasName property and make each class (Organization, Employee, Location, Role) a subclass of it.
Here’s an updated version of the ontology:
```turtle
@prefix org: <http://www.example.org/ontology/organization#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

org:Organization a rdfs:Class .

org:hasLocation a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range org:Location .

org:hasEmployee a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range org:Employee .

org:Location a rdfs:Class .

org:Employee a rdfs:Class .

org:hasRole a rdf:Property ;
    rdfs:domain org:Employee ;
    rdfs:range org:Role .

org:Role a rdfs:Class .

org:NamedEntity a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:NamedEntity ;
    rdfs:range xsd:string .

org:Organization rdfs:subClassOf org:NamedEntity .
org:Location rdfs:subClassOf org:NamedEntity .
org:Employee rdfs:subClassOf org:NamedEntity .
org:Role rdfs:subClassOf org:NamedEntity .
```
With this updated ontology, the hasName property is defined for the common class NamedEntity, and each class (Organization, Employee, Location, Role) is a subclass of NamedEntity. This ensures that the hasName property is properly defined for each class without ambiguity.
ChatGPT
So this is a very good response. We had to do the hard work of noticing the problem, but ChatGPT has applied an appropriate fix while staying within RDFS. It’s interesting that it didn’t use OWL restrictions, which might have been better still; we’ll come back to that later.
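For comparison, here is a sketch (ours, not ChatGPT’s) of the kind of OWL alternative we have in mind: drop the global domain on hasName and instead attach a local existential restriction to each class that must have a name:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix org:  <http://www.example.org/ontology/organization#> .

org:hasName a owl:DatatypeProperty .   # deliberately no rdfs:domain

# Every Employee must have a name, but having a name does not make something an Employee:
org:Employee rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty org:hasName ;
    owl:someValuesFrom xsd:string
] .
```

The same pattern can be repeated for Organization, Location and Role.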
Also, I was wrong to say in the prompt that `Organization` was one of the domains of `hasName` in the first draft; but, in mentioning that in the prompt, ChatGPT has now made `Organization` a `NamedEntity` as well, which is fine.
What about geography then
It’s still a very limited ontology. In particular, location is treated as just another named thing. While named places are a reasonable starting point, you might expect it to know that locations generally have spatial coordinates and to have made some provision for that.
[From now on I’ll abbreviate some of the responses to just show the changes in the developing ontology.]
Locations are geographic in nature. The ontology so far only assigns a name for the location. How would you modify it to represent the geographic location?
Dave
We can modify the ontology to represent the geographic location of a Location by adding a property for latitude and longitude. To do this, we can add two properties, hasLatitude and hasLongitude, with a range of xsd:double, as geographic coordinates are usually represented as decimal numbers. Here’s an updated version of the ontology:
```turtle
…
org:hasLatitude a rdf:Property ;
    rdfs:domain org:Location ;
    rdfs:range xsd:double .

org:hasLongitude a rdf:Property ;
    rdfs:domain org:Location ;
    rdfs:range xsd:double .
…
```
With this updated ontology, the geographic location of a Location can be represented by its latitude and longitude, as well as its name.
ChatGPT
Wot, no reuse?
So a perfectly good, if simplistic, fix. Though at this point I’m getting frustrated that it keeps making up its own terms, so I call it out on that.
That modification is correct but you have created new properties to represent the latitude and longitude. Are there alternative widely used RDFS or OWL vocabularies that you could have used to provide those properties?
Dave
Yes, there are alternative widely used RDFS or OWL vocabularies that can be used to represent geographic coordinates. For example, you could use the WGS 84 vocabulary, which is widely used for representing geographic information in the Semantic Web. The vocabulary defines the properties lat and long for representing the latitude and longitude of a location, respectively.
Here’s an updated version of the ontology using the WGS 84 vocabulary:
```turtle
@prefix org: <http://www.example.org/ontology/organization#> .
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
…
org:Location rdfs:subClassOf org:NamedEntity, geo:SpatialThing .
…
```
With this updated ontology, the geographic location of a Location can be represented by its latitude and longitude, which are provided by the WGS 84 vocabulary, and its name.
ChatGPT
Again a good fix but would have been better if it had chosen this approach without prompting. Reuse is definitely not its strong point!
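For completeness, instance data using the reused WGS 84 terms might look something like this (our own sketch, with invented coordinates):

```turtle
@prefix org: <http://www.example.org/ontology/organization#> .
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://example.com/headOffice> a org:Location ;
    org:hasName "Head office" ;
    geo:lat  "51.501"^^xsd:double ;
    geo:long "-0.142"^^xsd:double .
```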
What would happen if we had started in our original prompt with more of an emphasis on reuse? Could it have done more things like reuse `geo`?
To probe this I started a separate dialog with ChatGPT:
Generate an ontology to describe organisations. It should be able to represent an organisation, the locations at which it was based, the employees in the organisation and the roles that they play. Please express the ontology in Turtle syntax. Please reuse as many public ontologies and vocabularies that you think are appropriate so that your new ontology will interoperate with other data.
Dave
Here is an example of an ontology for organizations in Turtle syntax, reusing vocabulary from several public ontologies such as Dublin Core, Schema.org, and FOAF:
```turtle
@prefix dc: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix org: <http://www.w3.org/ns/org#> .
@prefix schema: <http://schema.org/> .

<http://example.org/Organization/ACME> a org:Organization, schema:Organization ;
    dc:title "ACME Inc." ;
    org:identifier "ACME-123456" ;
    schema:name "ACME Inc." ;
    schema:description "ACME is a multinational corporation specializing in various products and services." ;
    schema:email "info@acme.com" ;
    schema:telephone "(555) 123-4567" ;
    schema:url <http://www.acme.com> ;
    schema:address [
        a schema:PostalAddress ;
        schema:streetAddress "123 Main St." ;
        schema:addressLocality "Metropolis" ;
        schema:addressRegion "IL" ;
        schema:postalCode "12345" ;
        schema:addressCountry "USA"
    ] .

<http://example.org/Organization/ACME/Location/Metropolis> a org:Site, schema:Place ;
    dc:title "ACME Metropolis Location" ;
    schema:name "ACME Metropolis Location" ;
    schema:description "ACME's main headquarters located in Metropolis." ;
    schema:address [
        a schema:PostalAddress ;
        schema:streetAddress "123 Main St." ;
        schema:addressLocality "Metropolis" ;
        schema:addressRegion "IL" ;
        schema:postalCode "12345" ;
        schema:addressCountry "USA"
    ] ;
    org:hasSite <http://example.org/Organization/ACME> .

<http://example.org/Organization/ACME/Employee/JohnDoe> a foaf:Person, schema:Person ;
    dc:title "John Doe" ;
    schema:name "John Doe" ;
    schema:givenName "John" ;
    schema:familyName "Doe" ;
    schema:email "johndoe@acme.com" ;
    schema:telephone "(555) 555-5555" ;
    org:memberOf <http://example.org/Organization/ACME> ;
    org:role [
        a org:Role ;
        org:roleOf <http://example.org/Organization/ACME/Employee/JohnDoe> ;
        org:roleName "Project Manager"
    ] .
```
This ontology defines an organization (ACME), its locations (Metropolis), and an employee (John Doe) and their roles within the organization. The ontology reuses vocabularies from Dublin Core, Schema.org, and FOAF to provide a standard way of describing the organization and its components.
ChatGPT
Now this is curious and really interesting. Having asked it to reuse vocabularies, it was able to represent very sensible example organisational data (plucked from who knows where *) without needing to develop any new terms at all. It even used terms from our own org ontology, though it didn’t mention that in its text summary.
* note – I have tried to find this example on the web but without any luck so far.
On the one hand, this is potentially a very useful response showing the questioner that they don’t need any new terms at all and showing them how to use existing terms. On the other hand, we did ask for an ontology and (philosophical debates about A-Box only ontologies aside) we didn’t really get one – just example instance data.
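For contrast, the kind of terms-only answer we were fishing for might have looked more like this sketch of ours, reusing the same vocabularies at the schema level (the `ex:` terms and the particular subclass choices are purely illustrative):

```turtle
@prefix ex:     <http://www.example.org/ontology/organization#> .
@prefix org:    <http://www.w3.org/ns/org#> .
@prefix schema: <http://schema.org/> .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .

# Define only the local terms we need, anchoring each one to existing vocabularies:
ex:Organisation rdfs:subClassOf org:Organization, schema:Organization .
ex:Site         rdfs:subClassOf org:Site, schema:Place .
ex:Employee     rdfs:subClassOf foaf:Person, schema:Person .
ex:basedAt      rdfs:subPropertyOf org:hasSite .
```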
Employee shouldn’t be a class, employments change
Going back to our original dialogue, there is one important, deeper problem with the draft ontology so far. It has “chosen” to represent an employee using a class, which is not great. Normally we would use a class for a characteristic that is intrinsic to its members, for example being a Person, whereas I can be an employee or not at different times and for different organisations.
The ontology represents a fixed relationship between an `Employee` and their `Role`. This has two limitations. Firstly, an `Employee` might be a member of multiple organisations and we would want to make the relationship specific to the organisation. Secondly, the role of employees will change over time. How could you change the ontology to address these limitations?
Dave
To address the limitations, you can create a new class called Employment that represents the relationship between an Employee and their Role in a specific Organization. The class Employment would have properties employee, role, and organization to represent the employee, their role, and the organization they work for, respectively.
Here’s an updated version of the ontology to reflect these changes:
```turtle
…
org:hasEmployment a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range org:Employment .

org:Employment a rdfs:Class .

org:employee a rdf:Property ;
    rdfs:domain org:Employment ;
    rdfs:range org:Employee .

org:role a rdf:Property ;
    rdfs:domain org:Employment ;
    rdfs:range org:Role .

org:organization a rdf:Property ;
    rdfs:domain org:Employment ;
    rdfs:range org:Organization .

org:Employee a rdfs:Class .

org:Employment rdfs:subClassOf org:NamedEntity .
…
```
With this updated ontology, the relationship between an Employee and their Role is no longer fixed, but is represented as an instance of the class Employment. This allows for the representation of multiple employments for a single employee, and for changes in the role of an employee over time, as each employment relationship can be a distinct instance.
ChatGPT
So it introduced a class to represent the n-ary relationship of Employment, which is a good solution. Given that I explicitly mentioned change over time in the prompt, it would have been nice if that had been included as well. Though we can prompt it to do that.
While that change does address some of the limitations it does not allow us to represent when a given `Employment` was in force. How would you further extend the ontology to handle this case?
Dave
To handle the representation of when a given Employment was in force, you can add a new property startDate and endDate to the class Employment. The property startDate would represent the start date of the employment and the property endDate would represent the end date of the employment.
```turtle
…
org:startDate a rdf:Property ;
    rdfs:domain org:Employment ;
    rdfs:range xsd:dateTime .

org:endDate a rdf:Property ;
    rdfs:domain org:Employment ;
    rdfs:range xsd:dateTime .
…
```
ChatGPT
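Putting the pieces together, a single employment could then be described like this (our own illustrative instance data, with hypothetical resources):

```turtle
@prefix org: <http://www.example.org/ontology/organization#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://example.com/employment/1> a org:Employment ;
    org:employee     <http://example.com/alice> ;
    org:role         <http://example.com/roles/projectManager> ;
    org:organization <http://example.com/acme> ;
    org:startDate    "2021-04-01T00:00:00"^^xsd:dateTime ;
    org:endDate      "2023-01-31T00:00:00"^^xsd:dateTime .
```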
Critique thyself
At this point we have the start of a small but useful example ontology. To go much further we would need to figure out what else about the world of organisations would be useful to include in the modelling. Could ChatGPT help us with that?
This is a useful starting ontology for representing organisations. Please critique the design so far to identify what other limitations it has that should be considered.
Dave
The design so far is a good starting point for representing organizations and their related information, but it has a few limitations that should be considered:
- Representation of hierarchical relationships: The current design does not capture the hierarchical relationships between different roles within an organization. For example, a Manager might be a higher role compared to an Employee.
- Representation of departmental information: The current design does not represent information about different departments within an organization and their relationship to each other.
- Representation of address information: The current design only captures the geographic location of a location but does not represent the detailed address information of a location such as the street name, city, state, and postal code.
- Representation of financial information: The current design does not represent financial information about an organization, such as its revenue, expenses, and profits.
- Representation of company information: The current design does not capture information such as the type of company, its history, mission, and vision.
- Representation of skills and qualifications: The current design does not capture the skills and qualifications of employees, which can be important information when representing an organization.
- Representation of legal information: The current design does not represent legal information about an organization, such as its incorporation date, legal status, and registration number.
These limitations can be addressed by adding additional classes, properties, and relationships to the ontology to provide a more complete representation of organizations and their information.
ChatGPT
Now this is spectacular! It has pulled together an excellent checklist of aspects of organisations that a full ontology might need to represent. There are many discussions of the nature of organisations, and of the more specific problems of representing them in ontologies, in its corpus for it to draw on. So it’s not outright magic. However, pulling those together into such a comprehensive, clear and well structured summary is both impressive and very helpful.
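As a small illustration of how a couple of those points could be tackled by reuse rather than invention, the W3C org ontology already provides terms for organisational structure; a sketch (the instances are made up):

```turtle
@prefix w3org: <http://www.w3.org/ns/org#> .

# A department modelled as an organizational unit of its parent organisation:
<http://example.com/acme/engineering> a w3org:OrganizationalUnit ;
    w3org:unitOf <http://example.com/acme> .
```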
But what about meaning and inference?
Before we wrap up I wanted to go back to the first issue we asked it to address – the consequences of having multiple domain declarations for a property. This raises all sorts of questions.
To what extent does ChatGPT’s model have some representation of this inference?
If it had started out using OWL, would it have avoided this trap by using OWL restrictions instead of the global domain/range machinery from RDFS?
To probe the first question I started another dialogue with ChatGPT, trying to seed it with the ontology it had generated and see if it could critique it at a technical level.
This is an ontology. What is it about?
```turtle
@prefix org: <http://www.example.org/ontology/organization#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

org:Organization a rdfs:Class .

org:hasLocation a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range org:Location .

org:hasEmployee a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range org:Employee .

org:hasName a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range xsd:string .

org:Location a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:Location ;
    rdfs:range xsd:string .

org:Employee a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:Employee ;
    rdfs:range xsd:string .

org:hasRole a rdf:Property ;
    rdfs:domain org:Employee ;
    rdfs:range org:Role .

org:Role a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:Role ;
    rdfs:range xsd:string .
```
Dave
This ontology defines a set of classes and properties for modeling information about organizations. The classes defined are Organization, Location, Employee, and Role. The properties defined describe relationships between these classes such as hasLocation, hasEmployee, hasName, hasRole, etc. The properties also have a defined domain and range indicating the type of the subject and object of the property respectively.
ChatGPT
Seems like we have a reasonable starting point, so can we use ChatGPT to discover the implications of the multiple-domain declarations?
It should be possible for things to be organisations without being employees, or locations. Is that possible in this ontology?
Dave
Yes, it is possible for things to be organizations without being employees or locations in this ontology. The ontology defines the class “Organization” and the properties “hasLocation” and “hasEmployee”, but does not state that all organizations must have a location or an employee. The properties are optional and only apply to organizations that have a location or an employee, respectively.
ChatGPT
True, let’s be clearer that the names are the potential problem.
Is it possible for things to have names in this ontology and still allow them to be organisations without being employees or locations?
Dave
Yes, it is possible for things to be organizations without being employees or locations in this ontology. This is because the classes Employee and Location are only defined as the range of properties (hasEmployee and hasLocation respectively) and are not mandatory for an instance of the class Organization. In other words, an instance of Organization can exist without having an instance of Employee or Location associated with it.
ChatGPT
Maybe a concrete example would help?
If I add the following additional facts to the ontology:
```turtle
<http://example.com/acme> org:hasName "Acme" ;
    a org:Organization .
```
what else can we deduce about `<http://example.com/acme>`?
Dave
With the additional fact, we can deduce that <http://example.com/acme> is an instance of the class Organization as it is specified with the a property. We can also deduce that the name of this instance is “Acme”, as specified by the org:hasName property. However, we cannot deduce anything about the properties org:hasLocation and org:hasEmployee as there is no information provided about them in this additional fact.
ChatGPT
So no sign of an inference going on here. It’s playing back the example facts but not the consequences of the domain declarations in the ontology. Maybe it’s focusing too much on the example in the question and ignoring the ontology at the start of the dialogue?
That answer is not complete. I would like you to combine these new facts with the original ontology and tell me what can be deduced from that combination.
Dave
When combining the additional facts with the original ontology, we can deduce the following information about `<http://example.com/acme>`:
- It is an instance of the class Organization, as specified by the a property.
- Its name is “Acme”, as specified by the `org:hasName` property.
- We cannot deduce anything about its location or employees, as there is no information provided in the additional fact or the original ontology about the properties `org:hasLocation` and `org:hasEmployee`.
ChatGPT
So again, replaying the presented axioms in English and still no sign of deducing the consequence of a domain restriction.
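For reference, this is what an actual RDFS reasoner would have added here, given the four rdfs:domain declarations on org:hasName in the ontology at the start of this dialogue (our summary of the entailments, not ChatGPT output):

```turtle
# Entailed from <http://example.com/acme> org:hasName "Acme" :
<http://example.com/acme> a org:Organization ,
                            org:Location ,
                            org:Employee ,
                            org:Role .
```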
If we start a new dialogue and prune the question down to its basics, then we can get some deductions:
The following statements are given in the RDFS language using Turtle syntax:
```turtle
@prefix org: <http://www.example.org/ontology/organization#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

org:Organization a rdfs:Class .

org:hasName a rdf:Property ;
    rdfs:domain org:Organization ;
    rdfs:range xsd:string .

<http://example.com/acme> org:hasName "Acme" .
```
What additional facts can be deduced about `<http://example.com/acme>`?
Dave
From the given RDFS statements, we can deduce that <http://example.com/acme> is an instance of the org:Organization class and has a property org:hasName with a value of “Acme”. However, no other additional facts can be deduced.
ChatGPT
So the immediate consequences of a single domain declaration are comprehensible to it in this limited case, but not in the not-that-much-bigger ontology example.
Hinting about OWL?
Finally, in all the original dialogues we mentioned Turtle syntax but didn’t specifically ask for use of RDFS. If we explicitly steer it towards OWL will that change its modelling style at all, possibly encouraging use of OWL restrictions which have less global implications?
Generate an ontology to describe organisations. It should be able to represent an organisation, the locations at which it was based, the employees in the organisation and the roles that they play. Please express the ontology in OWL using Turtle syntax.
Dave
Here is a sample OWL ontology to describe organizations, their locations, employees, and roles in Turtle syntax:
```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix org: <http://www.example.org/ontology/Organization#> .

<http://www.example.org/ontology/Organization> a owl:Ontology ;
    rdfs:label "Organization ontology"@en .

org:Organization a owl:Class ;
    rdfs:label "Organization"@en .

org:Location a owl:Class ;
    rdfs:label "Location"@en .

org:Employee a owl:Class ;
    rdfs:label "Employee"@en .

org:Role a owl:Class ;
    rdfs:label "Role"@en .

org:hasLocation a owl:ObjectProperty ;
    rdfs:domain org:Organization ;
    rdfs:range org:Location ;
    rdfs:label "has location"@en .

org:hasEmployee a owl:ObjectProperty ;
    rdfs:domain org:Organization ;
    rdfs:range org:Employee ;
    rdfs:label "has employee"@en .

org:worksIn a owl:ObjectProperty ;
    rdfs:domain org:Employee ;
    rdfs:range org:Organization ;
    rdfs:label "works in"@en .

org:hasRole a owl:ObjectProperty ;
    rdfs:domain org:Employee ;
    rdfs:range org:Role ;
    rdfs:label "has role"@en .
```
ChatGPT
So a superficially different style but no really big substantive differences. It has added some documentation: both a top level label and labels for each class and property. It’s also added a likely inverse to `hasEmployee` (`worksIn`), though it has not stated that it is an inverse, which would have been nice. It has also not bothered to add any explicit support for `hasName` this time, so while it has stuck with using domain/range declarations there are no immediate consequences of that to follow up on.
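The missing inverse declaration would only take one more triple; something like this (our addition, not ChatGPT’s):

```turtle
org:worksIn owl:inverseOf org:hasEmployee .
```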
So what?
So what can we conclude from these simple experiments? Other than that you can lose lots of time playing with ChatGPT!
Like most ChatGPT examples, it’s really impressive what it can achieve, but ultimately what it generates is somewhat superficial and of limited use for practical work in its current state.
It hasn’t developed (or is not revealing) any really solid model of inference or of the consequences of design decisions that could act as an “intelligent” aid. But it is good enough at generating legal examples and responding to requests for changes that it could become a useful assistant, reducing drudge work, if it could cope with larger examples and if combined with some stronger validation component.
This could potentially make it a useful training aid – not on its own but if paired with a human trainer. It could allow a trainee to quickly generate examples without the need for quite so much typing and worrying about fiddly syntax details. The sort of limitations and issues in the ontology designs that I was pointing out to ChatGPT are exactly what you would want a trainee to learn to spot and correct. So you would be teaching people to interpret and critique what ChatGPT generates, and use it as a tool to reduce the tedium of iterating a design. That critical evaluation of draft ontologies is a key skill to learn and having the trainee critique ChatGPT’s output may cause less offence than having the teacher continually critiquing the trainee’s examples!
A really interesting possibility would be to build a custom-trained OntoGPT. Bear in mind that the difference between GPT-3, the underlying large language model, and ChatGPT boils down to three steps:
- Fine tuning of parameters based on supervised learning on (I believe) 13,000 manually labelled examples.
- Building a reward model based on labelling an ensemble of responses for test prompts.
- Reinforcement learning using the reward model from step 2 on randomly generated prompts.
You could imagine automating step 2 in this domain using an actual inference engine to validate the proposed ontologies, plus some semi-automated critique based on methodologies like OntoClean. That could enable a useful amount of further reinforcement learning without the need to generate tens of thousands of manual examples for supervised learning.
It would certainly be interesting to find out if that would result in any more explicit ability to make deductions from starting axioms.
The step in the dialogues that really stands out, though, is when we asked it to critique its own ontology. Its summary of features of organisations that you might want to think about modelling was excellent. You could well imagine that, when starting on modelling a domain you are not yet familiar with, ChatGPT could already be a valuable tool to help you identify common concepts and issues to be aware of. Used that way, more as a prompt and quick literature miner, errors and omissions are much less serious.
So data modellers are probably not out of a job yet. Though consultants should be really worried – this thing is spectacularly good at making up convincing (not necessarily accurate) answers on the spot, generally based on what you told it in asking the question!