BETTER FACEBOOK ADS RETARGETING PERFORMANCE
We created an algorithm that tests Facebook Ads retargeting campaigns based on different variables and analyzes past data to boost conversions and sales from Facebook while decreasing campaign costs.
Problem: Marketers needed a tool for creating effective Facebook Ad campaigns and optimizing their performance based on data, not just a hunch.
- The model uses historical data to identify the most relevant target group for the product in question.
- It identifies the audiences with the highest relevance scores, increasing the reach of the ad while reducing campaign costs.
- We included additional variables like ROAS and CPC, allowing the client to optimize the campaign even more.
- The algorithm showed users how to create the most effective campaign given the products on sale.
The AI-based enhancements are designed to help stores sell more products, improve the effectiveness of their marketing campaigns, and avoid overspending. They also help lower customer churn, improve engagement, and, ultimately, increase revenues.
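The group-selection logic described above can be sketched roughly as follows. This is a toy ranking over hypothetical historical records; the field names are illustrative, not the client's actual schema:

```python
def score_groups(history):
    """Rank target groups by past performance.

    history: dict mapping group name -> dict with keys
    'spend', 'revenue', 'clicks' (hypothetical schema).
    Returns groups sorted by ROAS, with CPC as a tiebreaker.
    """
    scored = []
    for group, h in history.items():
        roas = h["revenue"] / h["spend"] if h["spend"] else 0.0
        cpc = h["spend"] / h["clicks"] if h["clicks"] else float("inf")
        scored.append((group, roas, cpc))
    # Highest ROAS first; among equal ROAS, prefer the lower CPC.
    scored.sort(key=lambda t: (-t[1], t[2]))
    return scored

history = {
    "lookalike_1pct": {"spend": 500, "revenue": 2500, "clicks": 1000},
    "cart_abandoners": {"spend": 300, "revenue": 2100, "clicks": 400},
}
print(score_groups(history)[0][0])  # cart_abandoners has the higher ROAS
```

The production algorithm weighed more variables than ROAS and CPC, but the same principle applies: let historical data, not intuition, pick the group.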
ALL-IN-ONE MARKETING DATA DASHBOARD
We used machine learning to assign attribution across marketing channels and allow marketers to connect data from various sources. This helped them better understand their audience and the customer journey from click to conversion. All in a comprehensive marketing data dashboard that collects real-time data and gives valuable insights into what’s driving campaign results and delivering ROI.
Problem: Data was scattered across marketing channels. Teams needed a tool to compare insights in one place: to evaluate conversions while still accounting for the cost of each channel.
- First, we deployed a tool to monitor user behavior across client platforms.
- We tested it and tweaked its performance.
- We built out the aggregation ruleset.
- We developed the algorithms to assess attribution across marketing channels.
- We built a visualization dashboard using R Shiny.
The algorithm shows which actions impact sales the most, where clients can cut costs, and how a budget increase can improve performance, while highlighting new opportunities in Google Analytics.
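One simple way to assign attribution across channels is a linear model, which splits each conversion's value evenly over the journey. The production algorithms were more sophisticated, but this sketch shows the shape of the problem:

```python
def linear_attribution(journeys):
    """Split each conversion's value equally across the channels
    that appeared in the customer journey (linear attribution).

    journeys: list of (channels, conversion_value) tuples.
    Returns the total credited value per channel.
    """
    credit = {}
    for channels, value in journeys:
        share = value / len(channels)
        for ch in channels:
            credit[ch] = credit.get(ch, 0.0) + share
    return credit

journeys = [
    (["facebook", "email"], 100.0),
    (["google", "facebook", "email"], 90.0),
]
print(linear_attribution(journeys))
```

Feeding the per-channel credit together with per-channel cost into the dashboard is what lets marketers see which channels actually deliver ROI.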
BRAND LOGO RECOGNITION FOR REAL-TIME SOCIAL MEDIA MONITORING
We created a continuously trained machine learning algorithm that recognizes brand logos in images, along with a tool that generates the synthetic data needed to train it. The system recognizes brand logos in any online image with 60% accuracy, making it the most effective algorithm of its kind.
Problem: Marketers needed a tool that could automate real-time online monitoring of brand logos in images posted on social media. Automated brand logo recognition helps brands detect additional branding opportunities, react to potential issues and crises much more accurately, and protect brand identity.
Step one: Generating synthetic data
- We built a tool that could use a logo image to create a synthetic dataset.
- The tool could segment images, perform depth-testing, and carry out surface estimation.
- The tool then applied the logos with the correct size and perspective, placing them in the appropriate segment of the image.
- It generated tens of thousands of images we could use to train the algorithm.
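The placement step can be illustrated with a toy geometric model. The numbers and the depth-to-scale rule here are illustrative only; the real tool also warps the logo's perspective to match the surface:

```python
def place_logo(segment, logo_w, logo_h, depth):
    """Compute where to paste a logo inside an image segment.

    segment: (x, y, w, h) bounding box of a detected surface.
    depth: 0..1; farther surfaces get a smaller logo (toy model of
    the depth-testing step described above).
    Returns (x, y, scaled_w, scaled_h), centered in the segment.
    """
    x, y, w, h = segment
    # Shrink with depth, but never exceed 60% of the segment width
    # or the segment height.
    scale = min((1.0 - 0.5 * depth) * w * 0.6 / logo_w, h / logo_h)
    sw, sh = int(logo_w * scale), int(logo_h * scale)
    return (x + (w - sw) // 2, y + (h - sh) // 2, sw, sh)

print(place_logo((100, 50, 200, 100), 100, 40, 0.5))
```

Repeating this over many background images, segments, and depths is what yields a large, varied synthetic training set from a single logo file.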
Step two: Training the algorithm
- We used deep learning and an array of other technologies to design and train the final algorithm.
- The new tool learned to detect logos thanks to the synthetic dataset.
Step three: Building a verification tool
- The client required a verification tool to validate when the algorithm correctly identifies a logo.
- The algorithm uses this second, more precise dataset to improve its logo detection accuracy.
AUTOMATED INVOICE RECONCILIATION PLATFORM
We used advanced machine learning algorithms to create a tool that automated invoice classification, reduced manual work, and improved the accuracy of cost tracking by accounting departments.
Problem: Our client had to classify over 20,000 invoices manually, primarily to help with accounting and support with decision-making. Processing the invoices involved a lot of manual work that could be saved by a smart, automated solution.
Step one: Removing manual work
- The client had to process 80,000 invoices in total, out of which over 20,000 were done manually.
- The new tool means 83% of invoices are now processed automatically.
- This frees resources to focus on value-adding accounting tasks.
Step two: Simplifying the process
- The new setup has put the power back in the decision-makers’ hands, simplifying a time-consuming business process through automation.
Step three: Leveraging the new system
- By combining machine learning techniques with accounting expertise, we built a unique system designed to reach the best results.
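As a simplified illustration of automated invoice classification, consider a keyword-scoring approach. The categories and keyword lists below are hypothetical; the real system was trained on the client's labeled invoices rather than hand-written rules:

```python
from collections import Counter

# Hypothetical keyword lists per cost category (illustrative only).
CATEGORIES = {
    "travel": {"flight", "hotel", "taxi", "mileage"},
    "software": {"license", "subscription", "saas", "hosting"},
    "office": {"paper", "desk", "chair", "supplies"},
}

def classify_invoice(description):
    """Return the best-matching category for an invoice description,
    or 'manual_review' when nothing matches (the minority of invoices
    that still goes to a human)."""
    words = set(description.lower().split())
    scores = Counter({c: len(words & kw) for c, kw in CATEGORIES.items()})
    category, hits = scores.most_common(1)[0]
    return category if hits > 0 else "manual_review"

print(classify_invoice("Annual SaaS subscription license"))  # software
```

The key design point is the fallback: anything the model cannot classify confidently is routed to manual review, which is how the system reached 83% automation without sacrificing accounting accuracy.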
AUTOMATED TV METADATA EXTRACTION
We created a tool that automated the extraction of metadata, such as director, actors, year, or genre, from TV show descriptions. We used a hybrid extraction method based on regular expressions, linguistic rules, and statistical language models – using dependency analysis for the decomposition of templates.
Problem: Our client needed a way to detect metadata in program descriptions and move it to a new database. The process was laborious and needed to be automated to save time and manual effort.
Step one: Cleaning the metadata
- We started by improving the metadata for training, finding external sources to fill gaps in and validate the existing metadata.
Step two: Creating a pipeline for metadata extraction
- We built and trained algorithms to extract specific information from descriptions. We achieved a 1-2% error rate in the extracted metadata (depending on the field) with 2.5 metadata items extracted on average per description.
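The regular-expression part of the hybrid method can be sketched as follows. The patterns are illustrative and English-only; the production pipeline combined such rules with linguistic analysis and statistical language models:

```python
import re

# Illustrative patterns for two metadata fields.
PATTERNS = {
    "director": re.compile(r"[Dd]irected by ([A-Z]\w+(?: [A-Z]\w+)*)"),
    "year": re.compile(r"\b(19\d{2}|20\d{2})\b"),
}

def extract_metadata(description):
    """Return a dict of the metadata fields found in a description."""
    found = {}
    for field, pattern in PATTERNS.items():
        m = pattern.search(description)
        if m:
            found[field] = m.group(1)
    return found

print(extract_metadata("A 1994 drama directed by Frank Darabont."))
```

Each field gets its own extractor, which is why the reported error rate (1-2%) varies by field.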
Step three: Template analysis
- As a final step, we tackled the creation of the descriptions themselves. We had to turn the description into an abstract sentence structure to be able to reuse it within the template itself.
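A much-reduced view of the template step: once metadata values are known, they can be replaced with placeholders, turning a concrete description into a reusable skeleton (the real system derived the structure via dependency analysis rather than string replacement):

```python
def description_to_template(description, metadata):
    """Replace extracted metadata values with placeholders, turning a
    concrete description into a reusable template (a simplified view
    of the dependency-analysis step)."""
    template = description
    for field, value in metadata.items():
        template = template.replace(value, "{" + field + "}")
    return template

meta = {"year": "1994", "director": "Frank Darabont"}
print(description_to_template("A 1994 drama directed by Frank Darabont.", meta))
# A {year} drama directed by {director}.
```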
AUTOMATED TAX RETURN APP
We used the latest machine learning technologies and neural networks to build an intuitive, comprehensive application for consumers and businesses that helps automate tax returns and save time in large organizations. The automation guarantees a streamlined tax return process in any company, extracting data from a tax card in just 13 seconds, with 93% efficiency.
Problem: There was no way to settle tax returns easily and quickly based on the received tax cards. The process was manual, taking up hours instead of the seconds it could have been taking with smart AI technology.
1. Profile selection
To simplify and shorten the application process, the system only selects relevant profiles. At first, the user answers simple questions. Based on the information, the application generates a form that’s tailored to the customer.
2. Tax card processing
The tax card scanning feature using a mobile device saves several minutes of processing time. In just 13 seconds, the system extracts data with around 93% efficiency (for all fields) and inserts it into the appropriate fields of the form. This speeds up the tax return process, while the intuitive user interface makes it easy to follow.
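The form-filling step can be sketched as a confidence-gated mapping from scanner output to form fields. The field names and the shape of the OCR output are assumptions for illustration; the case study only reports the ~93% overall efficiency:

```python
FORM_FIELDS = {"gross_income", "tax_withheld", "employer_id"}  # illustrative

def fill_form(ocr_results, threshold=0.9):
    """Map OCR output to form fields, keeping only confident values.

    ocr_results: list of (field, value, confidence) tuples
    (a hypothetical shape). Low-confidence fields are left
    for the user to review instead of being auto-filled.
    """
    form, needs_review = {}, []
    for field, value, conf in ocr_results:
        if field in FORM_FIELDS and conf >= threshold:
            form[field] = value
        else:
            needs_review.append(field)
    return form, needs_review

form, review = fill_form([
    ("gross_income", "54200.00", 0.97),
    ("tax_withheld", "8130.00", 0.62),
])
print(form, review)
```

Gating on confidence is what keeps a 93%-efficient extractor safe to use: uncertain fields degrade to manual entry rather than to silent errors.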
3. Tax calculator
The app also includes a tax calculator, which computes the user’s refund amount or any additional payment due. The value appears shortly after the first data has been entered and updates as subsequent steps are completed.
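In spirit, the calculator recomputes a single figure after every completed step. The flat rate below is a hypothetical placeholder; real tax rules are bracketed and jurisdiction-specific:

```python
def refund_due(tax_withheld, income, rate=0.17):
    """Toy refund calculation with a single hypothetical flat rate.
    A positive result means a refund; a negative result means an
    additional payment is due. Recomputed after each completed step,
    as in the app."""
    return round(tax_withheld - income * rate, 2)

print(refund_due(tax_withheld=9000, income=50000))  # 500.0
```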
4. Language versions
The app is available in two language versions: English and Polish. Switching between the two is seamless and doesn’t require reloading the page.
5. Support for business clients
Additional solutions supporting the work of business clients save time in large organizations. Based on the uploaded file with tax cards, the system creates a list with new tax returns. It guides the user through each application, one after another.
Business clients also receive a tailored offer, defined by the application packages. The user can select the currency for their tax applications and pay via either of two payment platforms.
ONLINE ADVERTISING MONITORING FOR MEDIA HOUSES
For one of our clients, we created a system that monitors online advertising, tracks page views and clicks, and compares them to the data provided by external marketing agencies.
Problem: The client wanted to be able to easily verify the number of online ad views and clicks from ad servers with their own results. Ad servers use their own statistics, which often results in inconsistent and inaccurate data, affecting the decision-making process in marketing departments.
We used data mining to equip the system with useful features. Our engine can receive large amounts of data, then aggregate and process it. It also generates reports and shows data in real time. All this is accompanied by Google and Onet certifications concerning traffic requirements.
- 6,000 queries per second – for increased traffic volumes.
- We tested multiple technologies to decide which combination would bring the best results in the shortest time. Tested technologies: Twisted, Gevent, Redis, Consul, Docker Swarm, Docker Compose, ZMQ, Celery, RabbitMQ.
- We achieved the best results by combining several technologies. In this case, we used Python, R, and C linked to Redis, PostgreSQL, Nginx, Docker, and many other technologies, delivering a unique level of quality.
- Agile approach. The project started with defining the tests to see whether the selected technologies met the client’s requirements. The goal was to create an efficient system that would handle thousands of connections per second and analyze them.
After defining the tests, we conducted trials of each technology to find the one that met all the requirements and saved our team a considerable amount of development work.
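The verification logic at the core of the system amounts to aggregating raw events and flagging campaigns where agency-reported numbers drift from our own counts. A minimal sketch, with an assumed event format and tolerance:

```python
from collections import Counter

def compare_stats(our_events, agency_report, tolerance=0.05):
    """Aggregate raw events and flag campaigns whose agency-reported
    click counts deviate from our own by more than `tolerance`.

    our_events: iterable of (campaign_id, event_type) tuples
    (an assumed format for illustration).
    agency_report: dict campaign_id -> reported click count.
    """
    our_clicks = Counter(c for c, e in our_events if e == "click")
    flagged = []
    for campaign, reported in agency_report.items():
        ours = our_clicks.get(campaign, 0)
        if ours == 0 or abs(reported - ours) / ours > tolerance:
            flagged.append(campaign)
    return flagged

events = [("cmp1", "click")] * 100 + [("cmp2", "click")] * 100
print(compare_stats(events, {"cmp1": 103, "cmp2": 130}))  # ['cmp2']
```

The engineering challenge was not this logic but running it at 6,000 queries per second, which is what the technology trials above were designed to prove out.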
PRECISE USER TARGETING FOR FACEBOOK ADS
We created an algorithm that targets users on Facebook based on their psychological features and suggests how to create a Facebook ad that gets clicked. The system allows tracking ads/users/pixels and generating creative briefs for graphic designers based on keywords and the features of the target group.
Problem: Our client was experiencing low CTRs for their visual content on Facebook. One of the reasons was that the content wasn’t precisely targeted to the specific audience groups, which caused wasted ad money and ineffective campaigns.
Using a short description of the campaign, primary demographic data (gender, age, city size, etc.) and keywords describing the target group, the system divides the targeted group into sub-groups with a specific psychological profile and aesthetic preferences.
The generated creative brief helps to create a visual ad that matches the group and guarantees increased conversion.
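The flow from inputs to brief can be sketched as follows. The profile names and trait sets are purely illustrative; the production taxonomy was designed by the psychologists on the team:

```python
def build_brief(campaign, demographics, keywords, profiles):
    """Pick the psychological profile whose traits best overlap the
    campaign keywords and emit a creative-brief dict (profile names
    and traits here are illustrative, not the production taxonomy)."""
    best = max(profiles, key=lambda p: len(profiles[p] & set(keywords)))
    return {
        "campaign": campaign,
        "audience": demographics,
        "profile": best,
        "visual_cues": sorted(profiles[best] & set(keywords)),
    }

profiles = {
    "adventurer": {"outdoor", "travel", "bold"},
    "minimalist": {"clean", "simple", "calm"},
}
brief = build_brief("spring shoes", {"age": "25-34"},
                    ["travel", "bold", "sale"], profiles)
print(brief["profile"])  # adventurer
```

The brief then goes to graphic designers, who produce visuals matched to the sub-group's aesthetic preferences.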
The algorithm was created by psychologists, developers, and the data science team and it took six months to develop. Because of cultural differences, the system is currently focused exclusively on the Polish market.
BUSINESS INTELLIGENCE SOFTWARE ANALYZING STUDENT BEHAVIOR AND COURSE PROFITABILITY
For a European college in Cracow, we created an algorithm analyzing the profitability of degree courses. The system is an integral part of the university’s business intelligence, allowing the school to verify the quality of education based on the graduates’ work achievements and to create business plans based on reliable forecasts. All while reducing the cost of manually analyzing large data volumes.
Problem: To determine whether a course was profitable, the school’s employees had to analyze large data sets, which was time-consuming, inefficient, and prone to human error. The data was dispersed across numerous SQL databases, as well as raw sources like text files, PDF files, e-documents, and spreadsheets.
The goal of the project was to create a tool for integrating large and dispersed data sets. The time needed for the digital analysis was cut down to just a fraction of a second, instead of many weeks of the administration staff’s work.
- Our first step was to integrate the data sources into a data warehouse.
- Then we created an interface for data mining in the system to quickly generate indicators of student behavior on the market.
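An indicator of the kind the interface generates might combine course margin with graduate outcomes. This formula is a hypothetical stand-in, not the university's actual KPI:

```python
def course_profitability(tuition_per_student, students, teaching_cost,
                         graduate_employment_rate):
    """Toy profitability indicator: course margin weighted by how well
    graduates do on the job market (a hypothetical stand-in for the
    dashboard's real KPIs, computed over the data warehouse)."""
    margin = tuition_per_student * students - teaching_cost
    return margin * graduate_employment_rate

print(course_profitability(2000, 60, 90000, 0.85))
```

Once the data warehouse exists, indicators like this are recomputed in a fraction of a second instead of weeks of staff work.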
One of the most important goals of the project was to make the system intuitive and easy to use for all university employees. As a result, it gives more accurate analysis results (in real time), reducing the investment in time and money.
DIABETESLAB PROJECT – MANAGING DIABETES WITH AI
We created a mobile app that uses personal data to simplify the self-management of diabetes: collecting information from smart devices and medical equipment to present a holistic view of how to optimize your routine each day.
Problem: It’s estimated that 415 million people in the world have diabetes, a number predicted to rise to approximately 642 million by 2040. One person dies every six seconds because of diabetes (five million a year) – more than from HIV, tuberculosis, and malaria combined.
One of the 415 million patients is Przemysław Majewski, CEO of DLabs, who wanted to help other people with diabetes manage their illness, plan their diet, and create training plans.
The technology created by the DLabs team uses artificial intelligence to personalize diabetic therapy and help patients in their day-to-day management of this chronic disease. The solution will benefit people who have type 1 diabetes, as the condition requires intensive treatment and exceptional care, particularly during sports activities.
In the next step, we intend to adapt the solution to the needs of patients with type 2 diabetes and consider other groups of customers.
The goal of the project is to refocus the market on investing in the prevention of complications resulting from diabetes, avoiding costly treatment and the risk of severe consequences.
Suguard is intended to help patients achieve stable blood glucose levels. In particular, it can help patients who lead an active lifestyle improve their health and well-being and reduce the cost of treatment.
A handful of clients use Suguard daily. It offers 90% efficiency in predicting actions to stabilize blood glucose levels, decreasing the risk of complications while helping users choose the right foods, activities, and medicines.
DATA-BASED DECISION SUPPORT SYSTEM FOR THE FIRE BRIGADE
The DLabs Chief Data Science Officer participated in the creation of modern decision-support tools and recommendations for the commanders of the State Fire Service during rescue and firefighting operations. At the heart of the system are activity detection algorithms and dead-reckoning navigation based on an IMU (inertial measurement unit).
Problem: Making strategic decisions in crises under time pressure is extremely difficult. The fire brigade needed a system that improves the effectiveness of rescue operations by supporting decision-makers with real-time data analysis.
The goal of this project was to improve the safety of firefighters and to optimize the results of rescue operations. The core of the project was to design a modern risk management system that would help the commander in action.
The amount of information, time pressure, and unimaginable responsibility can effectively hinder decision-making in a given situation. Data enables us to establish a productive interaction between man and machine, making the work of firefighters much easier in high-risk actions.
Finding solutions requires excellent knowledge of the complex algorithms of dynamic risk management. The system both assesses and forecasts risk factors in real time and assists the commander by recommending the best scenarios for a given situation. The scenarios are based on models created from computer simulations and historical data.
One of the most critical issues was finding the right algorithms for the following problems:
- Estimating a firefighter’s movement inside the building.
- Creating an indoor fire spread model.
For our data scientist, the most interesting task was to create a version of the Kalman filter that uses data from gyroscopes and accelerometers, improving the estimation of firefighters’ trajectories.
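To give a flavor of the idea (not the production filter), here is a scalar Kalman filter smoothing a noisy one-dimensional position track; the real system fuses gyroscope and accelerometer channels in higher dimensions:

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter over a noisy 1-D position track.

    q: process noise (how much the true state can drift per step).
    r: measurement noise (how much each reading is trusted).
    Returns the filtered estimate after each measurement.
    """
    x, p = measurements[0], 1.0   # initial state and uncertainty
    estimates = [x]
    for z in measurements[1:]:
        p += q                    # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update: blend prediction and reading
        p *= (1 - k)
        estimates.append(x)
    return estimates

track = [0.0, 1.2, 0.8, 2.1, 1.9, 3.2]
print(kalman_1d(track))
```

The filtered track jitters far less than the raw readings, which is exactly what makes an indoor trajectory estimate usable for a commander.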
AUTOMATED STOCK MARKET TREND PREDICTION
We created a system that predicted changes in stock market trends for an international hedge fund.
Problem: The client wanted to be able to predict stock market trends and create the best trading strategies on currency markets.
Step one: Quantitative analysis
We collected, aggregated, and analyzed historical market data for transactions. An essential element was generating statistics and reports from big data sets.
Step two: Creating algorithms
During the project, it was necessary to build and analyze various statistical models and analyze historical market data using data mining techniques.
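One of the simplest statistical models of this kind is a moving-average crossover, shown here as an illustration only; the fund's actual models were considerably more complex:

```python
def trend_signal(prices, short=3, long=5):
    """Toy trend detector: compare short- and long-window moving
    averages over the latest prices (a simple stand-in for the
    statistical models built in this project)."""
    if len(prices) < long:
        return "hold"
    sma_short = sum(prices[-short:]) / short
    sma_long = sum(prices[-long:]) / long
    if sma_short > sma_long:
        return "uptrend"
    if sma_short < sma_long:
        return "downtrend"
    return "hold"

print(trend_signal([100, 101, 103, 106, 110]))  # uptrend
```

Such signals are then backtested against the historical market data collected in step one before any trading strategy is built on top of them.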