AI projects we’re proud of

See how some of our clients solved their business challenges with AI and machine learning.


We created an algorithm that tests Facebook Ads retargeting campaigns based on different variables and analyzes past data to boost conversions and sales from Facebook while decreasing campaign costs.

Problem: Marketers needed a tool for creating effective Facebook Ad campaigns and optimizing their performance based on data, not just a hunch.

What we did to solve it
Step one: Creating a predictive model
  • The model uses historical data to identify the most relevant target group for the product in question.
  • It finds the highest relevance score to increase the reach of the ad while reducing campaign costs.
Step two: Adding extra variables
  • We included additional variables like ROAS and CPC, allowing the client to optimize the campaign even more.
Step three: Creating a predictive model for the new variables
  • The algorithm showed users how to create the most effective campaign given the products on sale.
The algorithm used historical data to identify the correlations between group features and ad performance, increasing overall effectiveness while decreasing costs.
The AI-based enhancements are designed to help stores sell more products, improve the effectiveness of their marketing campaigns, and avoid overspending. They also help lower customer churn, improve engagement, and, ultimately, increase revenues.
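The relevance-scoring idea from step one can be sketched in miniature. This is an invented illustration, not the production model: the event fields and the score (click-through rate divided by cost per impression) are assumptions for the example.

```python
from collections import defaultdict

def relevance_scores(events):
    """events: list of dicts with 'segment', 'clicked' (0 or 1), 'cost'."""
    stats = defaultdict(lambda: {"clicks": 0, "impressions": 0, "cost": 0.0})
    for e in events:
        s = stats[e["segment"]]
        s["impressions"] += 1
        s["clicks"] += e["clicked"]
        s["cost"] += e["cost"]
    # Score = click-through rate divided by cost per impression, so segments
    # that convert well at low cost rank highest.
    return {
        seg: (s["clicks"] / s["impressions"]) / ((s["cost"] / s["impressions"]) or 1.0)
        for seg, s in stats.items()
    }

events = [
    {"segment": "A", "clicked": 1, "cost": 0.10},
    {"segment": "A", "clicked": 0, "cost": 0.10},
    {"segment": "B", "clicked": 1, "cost": 0.40},
    {"segment": "B", "clicked": 0, "cost": 0.40},
]
scores = relevance_scores(events)
best_segment = max(scores, key=scores.get)
```

A real model would predict performance for unseen audiences rather than just rank historical segments, but the ranking step looks much like this.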


We used machine learning to assign attribution across marketing channels and allow marketers to connect data from various sources. This helped them better understand their audience and the customer journey from click to conversion. All in a comprehensive marketing data dashboard that collects real-time data and gives valuable insights into what’s driving campaign results and delivering ROI.

Problem: Marketing data was scattered across channels. Teams needed a tool to compare insights in one place: to evaluate conversions while still accounting for the cost of each channel.

Step one: Design and development
  • First, we deployed a tool to monitor user behavior across client platforms.
  • We tested it and tweaked its performance
  • We built out the aggregation ruleset
Step two: Attribution and visualization
  • We developed the algorithms to assess attribution across marketing channels
  • We built a visualization dashboard using R Shiny

The algorithm shows which actions impact sales the most, where clients can cut costs, and how a budget increase can improve performance, while highlighting new opportunities in Google Analytics.
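As an illustration of the attribution step, here is a minimal linear (even-split) multi-touch attribution model. The channel names and journey data are invented, and the production algorithms were more sophisticated than this sketch.

```python
from collections import Counter

def linear_attribution(journeys):
    """journeys: list of (channels, conversion_value) pairs, one per conversion."""
    credit = Counter()
    for channels, value in journeys:
        share = value / len(channels)  # split the value evenly across touchpoints
        for channel in channels:
            credit[channel] += share
    return dict(credit)

journeys = [
    (["search", "social", "email"], 90.0),
    (["social", "email"], 60.0),
]
credit = linear_attribution(journeys)
```

Linear attribution is the simplest defensible baseline; data-driven models (e.g. Markov-chain removal effects) refine the same idea by weighting touchpoints unevenly.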


Problem: No available tool for automated online monitoring of brand-logo appearances in images posted on social media.

Solutions: A continuously trained ML algorithm that can detect a brand logo in an image, plus a tool to generate the synthetic data needed to train the algorithm in the first instance.

Step one: Generate synthetic data
  • We developed a tool that could use a logo image to create a synthetic dataset
  • The tool could segment images, perform depth-testing, and carry out surface estimation
  • The tool then applied the logos with the correct size and perspective, placing them in the appropriate segment of the image
  • The tool generated tens of thousands of images we could use to train the algorithm
Step two: Train the algorithm
  • We used deep learning and an array of other technologies to design and train the final algorithm
  • The new tool learned to detect logos thanks to the synthetic dataset
Step three: Build a verification tool
  • The user required a verification tool to validate when the algorithm correctly identifies a logo
  • The algorithm uses this second, more precise data set to improve the accuracy of its logo detection
DLabs created an algorithm that recognizes brand logos in any online image, so companies can monitor brand reach and engagement, then contact influencers without wasting time or money.
The system works with 60% accuracy, which at the time made it the most effective algorithm of its type.
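The placement step of the synthetic-data tool can be illustrated with a toy compositing function. This sketch only pastes a pre-scaled logo patch into a background grid; the segmentation, depth testing, and perspective handling described above are omitted, and all values are invented.

```python
def paste_logo(background, logo, top, left):
    """Return a copy of `background` with `logo` pasted at (top, left)."""
    out = [row[:] for row in background]
    for i, logo_row in enumerate(logo):
        for j, pixel in enumerate(logo_row):
            out[top + i][left + j] = pixel
    return out

background = [[0] * 5 for _ in range(5)]   # 5x5 "image" of zeros
logo = [[9, 9], [9, 9]]                    # 2x2 "logo" patch
composite = paste_logo(background, logo, 1, 2)
```

In the real pipeline this compositing would operate on RGB arrays, warp the logo to match the estimated surface, and record the paste coordinates as training labels.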

A platform that automates the process of invoice reconciliation.

Problem: An excessive amount of manual labour involved in time-consuming invoice processing

Solutions: Automatic invoice classification streamlined a repetitive business process, enhancing output by using advanced machine learning algorithms.

The customer had to classify over 20,000 invoices manually, primarily to help with accounting and support with decision-making. The tool auto-classified over 83% of all invoices, significantly reducing the manual overhead of a time-consuming task.
The implemented solution removed the manual effort involved in classifying invoices, helping the accounting department better track costs.
Step one: Remove manual labor
  • The client had to process 80,000 invoices in total, out of which over 20,000 were done manually
  • The new tool means 83% of invoices are processed automatically
  • This frees resources to focus on value-adding accounting tasks
Step two: Simplify process
  • The new setup has put power back in decision-makers' hands, simplifying a time-consuming business process through automation
Step three: Leverage new system
  • By combining machine learning techniques with accounting expertise, we developed a unique system that delivers optimal results
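As a rough illustration of automatic invoice classification, a tiny keyword-based classifier is sketched below. The categories and keywords are invented, and the real system used far more advanced machine learning models than keyword counting.

```python
def classify_invoice(text, keyword_map):
    """Pick the category whose keywords appear most often in the invoice text."""
    lowered = text.lower()
    scores = {category: sum(lowered.count(kw) for kw in keywords)
              for category, keywords in keyword_map.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

keyword_map = {
    "travel": ["flight", "hotel", "taxi"],
    "it": ["license", "hosting", "software"],
}
label = classify_invoice("Invoice for cloud hosting and software license", keyword_map)
```

The "unclassified" fallback mirrors the reported behaviour: roughly 83% of invoices were handled automatically, with the remainder routed back to a human.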


Problem: No automated way to detect the most engaging posts and content in different industries

Solutions: A solution that automatically detects the most engaging content from across the web for any given category based on a defined taxonomy.

The aim of the project was to demonstrate the relationship between user engagement with content and its semantic and syntactic features. DLabs also analyzed social media posts from Facebook and Twitter. We built a tool to help social media managers identify which posts perform best in their specific industry, helping clients source and create compelling content without a huge investment of time.
Step one: Build dataset
  • We gathered data from Twitter and other social media platforms, then defined a taxonomy per industry
  • We worked with human tagging specialists to label data (tagging 10K mentions along 3 category levels — Level 1: 10 categories, Level 2: 46, Level 3: 29)
Step two: Design model
  • The customer knew their requirements. Nevertheless, to ensure success, we invested time defining their specific needs as well as constructing KPIs and tests
  • This ensured we designed a model that brings real business value
Step three: Achieve a satisfactory accuracy level
  • Combining different machine learning techniques with expert knowledge and human inputs, we managed to train the algorithm above the assumed level of accuracy
  • Now, we can integrate the models into new product features
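The engagement ranking at the heart of such a tool can be sketched as a weighted score over interaction counts. The metric names and weights here are invented for illustration; the production models learned from semantic and syntactic features as well.

```python
def top_posts(posts, weights=None, k=1):
    """Rank posts by a weighted engagement score and return the top k."""
    weights = weights or {"likes": 1, "comments": 2, "shares": 3}
    def score(post):
        return sum(post.get(metric, 0) * w for metric, w in weights.items())
    return sorted(posts, key=score, reverse=True)[:k]

posts = [
    {"id": "a", "likes": 100, "comments": 5, "shares": 2},   # score 116
    {"id": "b", "likes": 40, "comments": 30, "shares": 20},  # score 160
]
best = top_posts(posts)[0]
```

Weighting shares and comments above likes reflects the common assumption that they signal deeper engagement; the right weights are an empirical question per industry.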


Problem: Difficulty with finding metadata such as director, actors, year, or genre in a program description and accurately porting that information to a new database.

Solutions: Hybrid extraction method based on regular expressions, linguistic rules, and statistical language models — using dependency analysis for the decomposition of templates.

The path to success:
DLabs built a solution to kill two birds with one stone. First, we looked to extract metadata and possible templates from existing descriptions. Then, we tackled the creation of the descriptions themselves.
We achieved a 1–2% error rate in extracted metadata (depending on the field), with 2.5 metadata items extracted per description on average.
Step one: Clean the metadata
  • We started by improving the metadata for training, finding external sources to fill gaps in and validate the existing metadata
Step two: Create a pipeline for metadata extraction
  • We then developed and trained algorithms to extract specific information from descriptions
Step three: Template analysis
  • As a final step, we turned each description into an abstract sentence structure, allowing us to reuse it within the template itself
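The regular-expression layer of the hybrid extractor might look roughly like this. The patterns and field names are invented examples; the real pipeline also combined linguistic rules and statistical language models.

```python
import re

# Invented example patterns for two metadata fields.
PATTERNS = {
    "year": re.compile(r"\b(?:19|20)\d{2}\b"),
    "director": re.compile(r"[Dd]irected by ([A-Z][a-z]+(?: [A-Z][a-z]+)*)"),
}

def extract_metadata(description):
    """Pull whatever fields the patterns can find out of a free-text description."""
    meta = {}
    m = PATTERNS["year"].search(description)
    if m:
        meta["year"] = int(m.group(0))
    m = PATTERNS["director"].search(description)
    if m:
        meta["director"] = m.group(1)
    return meta

meta = extract_metadata("A thriller from 1999, directed by David Fincher.")
```

Regexes handle the highly templated fields cheaply; the statistical models described above take over where descriptions deviate from any template.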

Comprehensive application for tax returns in Germany

Problem: No easy or quick way to settle taxes on the basis of received tax cards.

Solutions: An intuitive, comprehensive application for individual and business customers.

Extracting data from a tax card in just 13 seconds with 93% accuracy: impossible? Nobody likes taxes, but they become far less of a burden when they can be settled quickly and pleasantly. The latest machine learning technologies and neural networks used in the application save time in large organizations, and automation streamlines processes in every company.

Among the application's many functions, six solutions stand out for their significant impact on its quality.

1. Selection of profiles

To simplify and shorten the application process as much as possible, only the profiles relevant to the user's situation are selected. In the first stages of work on the declaration, the user answers simple questions. Based on the answers, the application generates a form that is perfectly tailored to the customer.


2. Processing of the tax card

The application user can save several minutes thanks to the tax card scanning function (scanning with a mobile device). In just 13 seconds, the system extracts data with around 93% accuracy (across all fields) and inserts it into the appropriate fields of the form. This makes settling tax returns through the application extremely fast and, thanks to the modern user interface, very intuitive.


3. Tax calculator

The app also includes a tax calculator, which computes the user's refund amount or any additional payment due. The value appears shortly after the first data has been entered and is updated as subsequent steps are completed.
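The running refund calculation can be illustrated with a deliberately oversimplified sketch. The flat tax rate is a placeholder assumption; real German tax rules are far more complex than a single rate.

```python
def refund(gross_income, tax_withheld, rate=0.2):
    """Positive result = refund due to the user; negative = additional payment."""
    tax_due = gross_income * rate  # placeholder flat-rate computation
    return round(tax_withheld - tax_due, 2)

amount = refund(gross_income=50_000, tax_withheld=11_000)
```

The sign convention matches the calculator's behaviour described above: the displayed value flips between "refund" and "additional payment" as the user enters more data.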


4. Multilingual

The application can be used in two language versions: English and Polish. Importantly, it is possible to switch between them easily, with no need to reload the browser.


5. Support for business clients

Additional solutions supporting the work of business clients save time in large organizations. On the basis of an uploaded file containing tax cards, the system creates a list of new declarations and guides the user through each application, one after another. Business customers also receive an offer with customized business terms, defined by application packages at attractive prices.

Thanks to the integration of two different payment platforms, the user can choose the desired currency for settling their tax applications.

Algorithm supporting media houses in the field of online advertising monitoring

Problem: No way to verify the number of views and clicks on ads against the campaign statistics displayed by online advertising servers.

Solutions: An easy and efficient system for quickly monitoring online advertising: tracking page views and clicks and comparing them with the data provided by marketing agencies.

How do you monitor online advertising? Are advertising campaigns displayed in line with the statistics of advertising servers? The DLabs team found answers to these questions. The engine we wrote ingests large amounts of data, then aggregates and processes it. It also generates reports and shows data in real time. And all this is backed by Google and Onet certifications regarding traffic requirements.
One of DLabs's clients, along with their business partners, saw the need to check whether their advertising campaigns were displayed in accordance with the statistics of various ad servers. Ad servers count their own statistics, which often leads to discrepancies. The system is equipped with functions based on data mining, and its intuitive statistics module allows reports to be generated and viewed in real time.

Interesting aspects of the project:

    • 6,000 queries per second handled in periods of increased traffic.


    • Testing of multiple technologies to decide which combination would bring the best results in the shortest time. Tested technologies: Twisted, Gevent, Redis, Consul, Docker Swarm, Docker Compose, ZMQ, Celery, RabbitMQ.


    • The best results were obtained by combining several technologies. In this case, the languages Python, R, and C were linked with Redis, PostgreSQL, Nginx, Docker, and many other technologies, which gives the system its unique quality.


    • An agile approach to the project. The project started with the definition of tests, the aim of which was to check whether the selected technologies met the requirements set by the client. The goal was to create an efficient system that could handle many thousands of connections per second and analyse them. After defining the tests, each technology was trialled in order to find one that met all the requirements. Thanks to this, the team saved time on unnecessary development work.
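The compliance check itself can be sketched as a comparison of measured versus agency-reported counts per campaign, flagging discrepancies above a tolerance. The campaign names and the 5% tolerance are invented for the example.

```python
def compliance_report(measured, reported, tolerance=0.05):
    """Flag each campaign True (within tolerance) or False (discrepancy)."""
    flags = {}
    for campaign, ours in measured.items():
        theirs = reported.get(campaign, 0)
        relative_diff = abs(ours - theirs) / max(theirs, 1)
        flags[campaign] = relative_diff <= tolerance
    return flags

report = compliance_report(
    measured={"c1": 1000, "c2": 700},
    reported={"c1": 1020, "c2": 900},
)
```

In production this comparison would run continuously over the aggregated event stream rather than on static dictionaries.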


Algorithm that allows precise targeting of users on Facebook

Problem: Precisely targeting users on Facebook and creating ad creatives that guarantee increased conversions.

Solutions: A system that allows tracking of ads, users, and pixels, and generates creative briefs for graphic designers based on keywords and the target group of the campaign.

Do you run promotional campaigns on Facebook? Do you think you know who you are targeting and which ad creative will give you the highest conversion? The DLabs team knows how to increase CTR by over a hundred percent! The algorithm targets users based on psychological features and suggests how to design an advertisement to ensure the highest probability of a click. Thanks to DLabs, Facebook ads gain a new dimension.
The system built by the DLabs team is a tool for precisely targeting Facebook users based on their psychological characteristics. Taking into account a short description of the campaign, e.g. basic demographic data (gender, age, city size, etc.) and keywords describing the target group, the algorithm divides the target group into sub-groups, each with a specific psychological profile and aesthetic preferences. Each subgroup is described in a creative brief, the most important elements of which are: how to find these users on Facebook and the type of content that interests them most. The generated creative brief helps designers create a visual advertisement that closely matches the group, and so increases conversion.
The algorithm is the result of six months of work by psychologists, developers, and the data science team. Due to social differences between nations, the system currently focuses exclusively on the Polish market.

Business Intelligence software for the European Higher School in Cracow

Problem: Manual analysis of large data sets by employees: time-consuming work, lower efficiency, and a higher chance of errors.

Solutions: An algorithm analysing the profitability of courses of study at the university, reducing the cost of manual analysis of large amounts of data.

The goal of the project was to create a tool integrating large, dispersed data sets, then to make their digital analysis take just a fraction of a second rather than, as before, many weeks of work by administrative staff.
The system is an important part of the university’s activity. It allows them to verify the quality of education through the prism of graduates’ achievements in the labour market and also allows users to set business plans based on reliable forecasts regarding the profitability of future courses of study.
Data diffused across many sources was a big problem. This included numerous SQL databases, but also raw sources such as text files, PDF files, e-documents, and spreadsheets. The first step taken by the DLabs team was to integrate these data sources into a data warehouse. Then, a data-mining interface was created in the system to easily generate indicators of student behaviour on the labour market. One of the basic aims of the project was to make the system intuitive and easy to use for everyone at the university. As a result, the system gives more accurate analysis results (in real time), reducing costs in both time and money.

DiabetesLab project - Suguard mobile app

Problem: Lack of a tool to help people with diabetes manage their illness, plan their diet, and create training plans.

Solutions: One tool that uses personal data to simplify the self-management of diabetes: taking information from smart devices and medical equipment to present a holistic view of how to optimize your routine each day.

An estimated 415 million people worldwide suffer from diabetes, a number predicted to rise to approximately 642 million by 2040. Diabetes kills one person every six seconds (five million a year) – more than HIV, tuberculosis, and malaria combined. One of those 415 million patients is Przemysław Majewski, CEO of DLabs.
The technology created by the DLabs team uses artificial intelligence to personalize diabetic therapy and help patients in their day-to-day management of this chronic disease. The solution first targets people suffering from type 1 diabetes, because it requires intensive treatment and exceptional care, in particular during sports activities. In the next step, the team intends to adapt the solution to the needs of patients with type 2 diabetes and to consider other groups of clients. The aim of the project is to reorient the market towards investments that prevent complications resulting from diabetes, avoiding costly treatment and the risk of serious consequences.
Suguard is intended to help patients achieve stable blood glucose levels. In particular, it can help patients who lead an active lifestyle to improve their health and well-being and to reduce the cost of treatment. A handful of clients use Suguard daily. It offers 90% efficiency in predicting actions to stabilize blood-glucose levels, decreasing the risk of complications while helping users choose the right foods, activities, and medicines.

The members of our team of analysts and engineers also took part in the following projects:

Decision support system for the Fire Brigade

Problem: Making strategic decisions in crisis situations under time pressure.

Solutions: A system that improves the effectiveness of rescue operations thanks to real-time data analysis.

The aim of this project was to improve the safety of firefighters and to optimize the effects of rescue operations. The core of the project was to design a modern risk management system to support the commander during operations inside a building.
The Chief Data Science Officer of the DLabs team participated in the creation of modern decision-support tools and recommendations for commanders of the State Fire Service during rescue and firefighting operations. At the heart of the system are activity-detection algorithms and dead-reckoning navigation using an IMU (inertial measurement unit).
The amount of information, time pressure, and immense responsibility can seriously hinder decision-making in such situations. Modern methods of data use today enable us to establish effective interaction between man and machine, significantly facilitating firefighters' work in high-risk actions. Finding such solutions requires excellent knowledge of complex dynamic risk-management algorithms. The system both assesses and forecasts risk factors in real time and assists the commander by proposing optimal scenarios for a given action. These scenarios are based on models created from computer simulations and historical data.

Interesting aspects of the project:

One of the most important issues was finding the right algorithms for the following problems:

  • Estimation of the firefighter’s movement inside the building.
  • The model of the spread of fire indoors.

For our data scientists, the most interesting task was to create a version of the Kalman filter that uses data from gyroscopes and accelerometers, which improved the estimate of a person's trajectory.
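A toy one-dimensional Kalman filter illustrates the underlying idea of smoothing noisy sensor readings. The production version fused gyroscope and accelerometer data in multiple dimensions; the measurement sequence and noise parameters below are invented.

```python
def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
    """Smooth a sequence of noisy scalar readings."""
    x, p = measurements[0], 1.0      # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var             # predict: uncertainty grows between steps
        k = p / (p + meas_var)       # Kalman gain
        x += k * (z - x)             # update the estimate with the measurement
        p *= 1 - k                   # update the estimate's variance
        estimates.append(x)
    return estimates

estimates = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])
```

Each estimate is a variance-weighted blend of the prediction and the new measurement, which is exactly what makes the filter suited to fusing two noisy inertial sensors.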

Algorithm supporting the work of a brokerage company

Problem: Forecasting trading strategies on currency markets.

Solutions: A system that anticipates changes in stock market trends so that shares can be bought or sold in advance.

In the project for the international hedge fund, the aim was to predict trading strategies by using statistical models on currency markets.
Initial tasks focused on quantitative analysis, i.e. the collection, aggregation, and analysis of archival market data for transactions. An important element here was generating statistics and reports from big-data datasets. We then had to focus on creating appropriate algorithms for high-frequency trading with minimal delay. During the project, it was necessary to develop and analyse various statistical models and to analyse historical market data using data mining techniques.

Interesting aspects of the project:

The best results were provided by a Kalman filter combined with the so-called Markowitz portfolio theory.
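For the two-asset case, the minimum-variance weights from Markowitz portfolio theory have a closed form, sketched below with invented variances and covariance; the production system estimated these quantities dynamically (e.g. via the Kalman filter mentioned above).

```python
def min_variance_weights(var_a, var_b, cov_ab):
    """Closed-form minimum-variance weights for a two-asset portfolio."""
    w_a = (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)
    return w_a, 1 - w_a

w_a, w_b = min_variance_weights(var_a=0.04, var_b=0.09, cov_ab=0.006)
```

As expected, the asset with lower variance receives the larger weight; with more assets the same optimisation is solved with the full covariance matrix.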
