Are you curious to find out what we did in 2019? Jerome, our A.I. practice lead, wrote a summary of everything we’ve been working on over the past year in terms of research, development & projects, as well as a thank-you to all the teams we’ve worked with, including some fantastic interns from Ecole 19 and KU Leuven.
Strategy & Tools
Our strategy this year was based on developing “NoAI” A.I. (Not Only Artificial Intelligence) and broadening our focus beyond machine learning, on the premise that a true A.I. solution should always bridge multiple algorithms.
To achieve this goal we worked on various axes:
- Evaluating cloud offerings (Google, Watson, …)
- Evaluating & testing various A.I. frameworks & tools
- In-house solution development
- In-house component development
- Client projects & assessments
- Creating an implementation framework for projects
Evaluating Cloud Offerings
We tried multiple offerings and ended up keeping two: IBM Watson, which performed very well for emotive computing, and Google for images and NLP (with Microsoft as a strong contender).
These platforms were mainly used to create training data sets and are rarely used in production environments (we prefer fully on-premise deployments), except in a few cases: Tone Analyzer for web scraping, and Google Translate or Dialogflow linked with Chatfuel.
If you want to find out more about our A.I. practices, get in touch with Kevin D’hooge (email@example.com).
Evaluating A.I. frameworks & tools
A lot of work was done, with one clear winner: TensorFlow! Other products and frameworks that were kept include:
- Elasticsearch + Kibana for the NoSQL parts
- Neo4j for graph databases
- scikit-learn with TensorFlow
- NVIDIA CUDA
- C#, .NET, and Python with Anaconda & VS 2019
These were our clear preferences, with handy integrations with Blue Prism & UiPath for Cognitive Automation.
On the NLP side, which was a priority this year, we tried a multitude of products and essentially kept spaCy, linked with other supporting elements. Above all, we created the components needed to improve upon spaCy’s models.
Other products that were nice to use: Stanford NLP
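To illustrate what “improving upon a model” can mean in practice, here is a minimal, standard-library-only sketch (not our actual component) of layering deterministic rules on top of a statistical model’s output; the `model_entities` function is a hypothetical stand-in for predictions from a model such as spaCy’s.

```python
# Illustrative sketch: patching a statistical model's entity output with a
# deterministic rule-based layer. model_entities() is a hypothetical stand-in.
import re

def model_entities(text):
    # Stand-in for entities predicted by a statistical model (e.g. spaCy);
    # hard-coded here so the sketch stays self-contained.
    return [("Jerome", "PERSON")]

# Deterministic rule: dates written as dd/mm/yyyy
DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

def improved_entities(text):
    """Merge the model's predictions with rule-based date matches."""
    entities = list(model_entities(text))
    for match in DATE_RE.finditer(text):
        entities.append((match.group(), "DATE"))
    return entities

text = "Jerome signed the contract on 12/03/2019."
print(improved_entities(text))  # the model misses the date; the rule adds it
```

The design point is that the statistical layer and the rule layer stay independent: the model can be retrained without touching the rules, and vice versa.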
In-house solution development
- An advanced chatbot
- An NLP server that indexes legal documents, extracts modalities and functions as an enterprise search engine
- A web scraper which applies different analyses to the data it scrapes
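The scrape-then-analyse idea behind the last item can be sketched in a few lines with only the standard library; the real tool and its analyses are considerably more elaborate, and the word-frequency analysis here is just one hypothetical example.

```python
# Minimal sketch of scrape-then-analyse: strip tags from a page, then run an
# analysis (here: word frequencies) over the extracted text. Illustrative only.
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML page, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def analyse(html):
    """One example analysis: word frequencies over the scraped text."""
    parser = TextExtractor()
    parser.feed(html)
    words = " ".join(parser.chunks).lower().split()
    return Counter(words)

page = "<html><body><h1>Contract notice</h1><p>The contract is signed.</p></body></html>"
print(analyse(page).most_common(2))
```

In the real pipeline, different analysis functions can be swapped in behind the same extraction step.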
Client projects & assessments
2019 was the year in which A.I. really kicked off at BrightKnight with:
- An advanced search engine & contract analysis tool for the Procurement department of one of Belgium’s largest enterprises
- An analysis tool for company closure documents for an insurance company
- Assessments for a utility company
And a variety of projects in the pipeline.
The belief behind these components is that A.I. can easily be industrialised, and that design & data determine the solution, not the underlying code!
As mentioned earlier, we primarily developed agents. A non-exhaustive list is below:
- Template for Python REST agents (pre-implementation of monitoring, error handling,…)
- Template for C# REST agent (pre-implementation of monitoring, error handling,…)
- Classifying agent (TensorFlow)
- Connection agent for Elasticsearch + Neo4j (the first for data, the second for storing data relationships)
- Modality extraction agent (TensorFlow): identification of personal details (first name, family name, address, birthdate, birthplace), detection of contracts, …
- Basic NLP agents: distance, geographical distance, language detection, translation…
- Agent-generating agent (incubator)
- Image-analysis agent (CNN)
- Input-normalising agent for chatbots (RNN)
- Negotiation agent (OpenAI, Reinforcement Learning)
- Advanced RegEx agent
- Taxonomy/ontology handling agents
Creating an implementation framework for projects
We approached projects with a method built up in previous projects and reinforced at BrightKnight:
- Always start with a comprehensive assessment to determine the collective & individual maturity and knowledge of the client, and define a vision, governance & platform accepted by all stakeholders
- Bring our own infrastructure to guarantee the necessary processing power, taking confidentiality into account (NDAs, SoD, external HDDs which do not leave client premises, …)
With SVM, Fuzzy Logic and many others still in the pipeline, our goal is clear: create enough components to be able to use them as Lego building blocks, with every solution being a design mixed with various generic components.
No PoCs! A.I. concepts are mature and were, in some cases, confirmed decades ago. The concept does not need proving. However, educating clients & partners is essential: A.I. is a field of its own and is not learned in a matter of weeks.
Machine Learning is not A.I. (an amazing part, but not enough).
Effort is needed to mature A.I. capabilities to the point where they become a tangible advantage.
Special thanks go out to Vincent Cardon & Ronny Neckebroek, as well as many other fantastic people who have collaborated with Jerome: Charles-Antoine, Sébastien, Mickael, Sofiane (all four from Ecole 19), but also Ruben, Pieter & Brecht (from KU Leuven). Not forgetting Nikita from Russia, Daniel from the US, Omer,…
Artificial Intelligence starts with human intelligence… and Kinder Surprise eggs. (A tradition ongoing since 2014!)
That’s it for 2019… Bring on 2020!
– Jerome Fortias