AI Factory 1.0 created by
ERGO Technology & Services S.A. for ERGO

Efficient Artificial Intelligence meets professional IT

The AI Factory is a modular, cloud-based solution for state-of-the-art AI model development and the enabler for secure, efficient operationalization and real-time integration of AI into existing business processes.

Learn More
mission
The AI Factory establishes a common, compliant platform for running production-ready AI use cases and provides a flexible environment for model development and testing.
about

Key Features

This solution allows Artificial Intelligence to deliver on its promise and become a professionally developed and managed part of our business process landscape. By providing this cutting-edge infrastructure, we empower business units to reach new goals.

Cloud-based and fully compliant

Built on top of AWS (Amazon Web Services), the AI Factory allows any business unit to take full advantage of cloud computing from day one, in a fully compliant manner. The AI Factory specifically addresses the needs of the European finance industry (e.g. IT security, GDPR-ready handling of personal data, multi-tenancy and distinct user roles).

Flexible and transparent

Scale up and down with minimal changes to your code. Switch toolsets and servers instantaneously. Benefit from sophisticated monitoring and dashboards. With a broad range of tools and services at their disposal, your data analytics teams will reach a new level of effectiveness.

Professional and advanced

The AI Factory is a state-of-the-art IT platform that makes implementing AI models considerably easier, although it remains an environment for dedicated subject matter experts. The approach reflects the growing importance of AI and the need to tackle the related challenges professionally and efficiently.

how it works

How does our AI Factory support the machine learning process?

The AI Factory allows data professionals to use state-of-the-art tools for model development and greatly improves the analysis and processing of large amounts of data in their daily work. Once complex modeling tasks have been completed, operationalization unlocks the full power of sophisticated ML solutions.
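
To make this concrete, here is a minimal sketch of what a model development step might look like on such a platform: a classifier is trained on sample data and serialized as an artifact that the operationalization side can pick up. The libraries (scikit-learn, joblib) and the file name are illustrative assumptions, not a description of the AI Factory's actual toolset.

```python
# Illustrative only: train a simple classifier and persist it as an artifact
# for a later operationalization step. scikit-learn and joblib are assumed
# libraries; the AI Factory's real toolchain may differ.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")

# The serialized model becomes the hand-over artifact to operationalization.
joblib.dump(model, "model.joblib")
```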

platform

Why Cloud?

The underlying cloud platform provides a fully managed infrastructure that scales automatically, integrates with the latest data technology and comes with many built-in optimizations. This enables developers and data scientists to quickly build and deploy models without worrying about the sophisticated infrastructure underneath.
Costs
The pay-per-use model eliminates the need to invest in expensive physical infrastructure, large up-front license costs and ongoing maintenance fees. It prevents overcapacity and wasted computing resources and makes it easy to access more sophisticated capabilities without bringing in new advanced hardware.
Speed and consistency
Consistently following the infrastructure-as-code design paradigm enables a safe, efficient and controlled setup of components. Deploying new workspaces for data professionals becomes a repeatable and consistent procedure (see the sketch after this list).
Efficiency
Using cloud services facilitates model building and training by providing pre-built, pre-configured development environments and implementations of open-source algorithms.
Security
Monitoring is a critical component of cloud security and management. The platform relies on cloud monitoring solutions and on-premise infrastructure to detect security threats and anomalies in deployed services, models and the underlying infrastructure through continuous assessment and measurement of their behavior.
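
As an illustration of the infrastructure-as-code idea mentioned under "Speed and consistency", the following minimal sketch defines a hypothetical data-science workspace (here just an encrypted, versioned S3 bucket) with the AWS CDK in Python. The stack name, resource layout and library choice are assumptions for illustration, not the AI Factory's actual provisioning code.

```python
# Illustrative infrastructure-as-code sketch using the AWS CDK (Python, v2).
# Everything named here ("WorkspaceStack", bucket id, stack id) is hypothetical.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class WorkspaceStack(Stack):
    """A repeatable, version-controlled definition of a data-science workspace."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Encrypted, versioned bucket for the workspace's data and model artifacts.
        s3.Bucket(
            self,
            "WorkspaceBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,
        )


app = App()
WorkspaceStack(app, "ai-factory-workspace-dev")
app.synth()
```

Because the workspace is expressed as code, deploying it (e.g. with `cdk deploy`) reproduces the same setup consistently in every environment.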

Architecture overview

Model Development Environment
Created to support the daily activities of Data Scientists and Data Engineers. Used for data wrangling, the development of data transformation pipelines and the creation of AI models.
Operationalization Environment
Enables the production, launch and operationalization of AI models created within the Model Development Environment. Provides easy-to-implement, dedicated, fault-tolerant and scalable AI micro services (see the sketch after this overview).
Monitoring Services
Created to monitor AI services in real time and at regular intervals. Used to assess the business value, ensure the reliability of the AI service and gain insights for further improvement of the AI model.
Support Layer
This layer enables access to the AI platform from one place. Daily tasks can be performed efficiently and without the need to configure additional services on your local machine.
Security Layer
This layer is responsible for ensuring high security standards within the AI Factory. Its main aspects are authentication, authorization and data consistency within the platform. It protects all elements of the solution.
Logging and Monitoring Layer
This layer allows you to view application logs and create notifications from one place, without logging in to each service separately or aggregating logs manually. It also enables integration with the global monitoring and logging solutions of existing on-premise systems.
Operations Layer
This layer enables launching services in an environment that supports their maintenance, scaling and operation. It isolates compute from storage, an architecture pattern that makes the platform resistant to resource contention. It also enables running data pipelines on transient environments that are created only for the purpose of a specific task.
Provisioning Layer
This layer is responsible for providing the core elements needed to build a generic, scalable and extensible architecture for services and infrastructure. Choosing Infrastructure as Code as the entry point enables easy management of deployment pipelines and simplifies the maintenance processes running on the platform.
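
To illustrate what a dedicated, scalable AI micro service from the Operationalization Environment could look like in code, here is a minimal sketch using FastAPI. The framework, endpoint shape and model file name are illustrative assumptions rather than the platform's actual implementation.

```python
# Illustrative micro-service sketch; FastAPI and the "model.joblib" artifact
# are assumptions, not the AI Factory's actual serving stack.
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ai-factory-demo-service")

# Load the artifact produced during model development (assumed file name).
model = joblib.load("model.joblib")


class PredictionRequest(BaseModel):
    features: List[float]


@app.post("/predict")
def predict(request: PredictionRequest) -> dict:
    # Run inference on a single feature vector and return the predicted class.
    prediction = model.predict([request.features])
    return {"prediction": int(prediction[0])}
```

Served with a standard ASGI server (e.g. `uvicorn service:app`), such a service can be containerized and scaled horizontally by the operations layer.
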
our stack

Technologies we use

use cases

Advanced Analytics Use Cases

Below is a list of AI models (use cases) already deployed on the AI Cloud Platform.
AI-based E-Mail classification and routing
before
  • 35% of 300,000 incoming e-mails were incorrectly addressed or sent only to a central mailbox.
  • Manual re-routing increased processing time and reduced the straight-through processing (STP) ratio.
after
  • AI e-mail classifier automatically reroutes e-mails based on content.
  • Manual effort significantly reduced.
benefits
  • Faster processing of customer requests.
  • Increased efficiency due to reduction of time spent on irrelevant e-mails.
  • Cost reduction due to increased STP ratio.
AI-based Health Claims Outpatient Medical Bill Bouncer
before
  • 17% of 5.7 million outpatient medical bills were selected for additional manual verification for technical reasons (rules engine) or due to amount thresholds.
  • Manual verification increases processing time and reduces the straight-through processing (STP) ratio.
after
  • The AI Bouncer module suppresses uneconomical verification requests and reduces the manual verification performed by the business teams.
benefits
  • Faster processing of customer requests.
  • Increased accuracy and reduced manual work.
  • Cost reduction due to increased STP ratio.
evolution

Evolution of the AI Factory

Stage 1 (completed)
Back in 2017, it all started with a Hadoop-based on-premise AI model development platform.

Stage 2 (completed)
At the end of 2019, a hybrid on-premise/cloud setup was in place, mainly to support GPU-intensive AI training jobs.

Stage 3 (Current)
By the end of 2020, a fully cloud-based AI platform for both model development and the operationalization of production-ready AI use cases was established.

Stage 4 (Future)
Sharing is caring. We are working hard to make the AI Factory available to ERGO's international subsidiaries. Beyond that, our unique approach might also support the needs of other industries.

On-Premise
Hybrid Cloud
Public Cloud

Contact Us