Suguard Report – Diabetes and Sports

DiabetesLab is creating Suguard, a smart mobile application for people with type 1 diabetes. Our mission is to raise the awareness of people with diabetes and to make their everyday disease management easier thanks to self-learning artificial intelligence algorithms. This report summarizes a questionnaire survey of people living with diabetes. Its aim was to understand what problems, fears, and obstacles people with diabetes face in the context of sports, and how existing mobile applications can help them.

Suguard

The study we carried out confirmed the thesis that there is no app on the market dedicated to people with diabetes that fully helps them manage the disease in the context of sports. The survey showed that the available apps lack functions that would help people with diabetes manage the disease effectively during physical activity, including the following:

  • information on how a given physical exercise affected their glucose level, what the sugar trends were, how to stabilize the fluctuations, and how to adjust the exercise to the active insulin dose in the body,
  • access to knowledge about how other people with diabetes handle physical activity,
  • a calculator that, based on recently consumed meals, glucose level results, and the time planned for physical exercise, would suggest insulin doses and the amount of carbohydrates to consume (a minimal interface sketch follows this list).
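To make the last of these functions more concrete, below is a minimal interface sketch of such a calculator in Python. All names and types are our assumptions for illustration: the survey only names the inputs (recent meals, glucose readings, planned exercise) and the outputs (an insulin dose and an amount of carbohydrates), and the dosing logic is deliberately left unimplemented, since it would require clinical validation.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Meal:
        carbs_g: float        # carbohydrates consumed, in grams
        minutes_ago: int      # how long ago the meal was eaten

    @dataclass
    class ExercisePlan:
        duration_min: int     # planned duration of the exercise
        intensity: str        # e.g. "light", "moderate", "intense"

    @dataclass
    class Suggestion:
        insulin_dose_units: float  # suggested insulin dose (units)
        carbs_to_eat_g: float      # suggested carbohydrates to consume (grams)

    def suggest(meals: List[Meal], glucose_mg_dl: float, plan: ExercisePlan) -> Suggestion:
        """Hypothetical calculator described by survey participants.

        Only the inputs and outputs come from the survey; the dosing rules
        themselves are not specified there and are not guessed here.
        """
        raise NotImplementedError("Dosing logic requires clinical validation")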

Participants who use different types of mobile apps indicated that the functions they use the most are:

  • glucose level analysis, in order to prevent hypoglycemia and hyperglycemia, mainly after physical exercise,
  • blood sugar level alerts, i.e. when it is too low or too high (a minimal threshold check is sketched below).
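As an illustration of the alert function, here is a minimal sketch of a threshold check in Python. The 70 mg/dL and 180 mg/dL cut-offs are the commonly used hypo- and hyperglycemia limits, not values taken from the survey, and a real app would let the user configure them.

    from typing import Optional

    HYPO_THRESHOLD_MG_DL = 70    # commonly used lower limit, not from the survey
    HYPER_THRESHOLD_MG_DL = 180  # commonly used upper limit, not from the survey

    def glucose_alert(glucose_mg_dl: float) -> Optional[str]:
        """Return an alert message when a reading is too low or too high."""
        if glucose_mg_dl < HYPO_THRESHOLD_MG_DL:
            return f"Low glucose: {glucose_mg_dl} mg/dL"
        if glucose_mg_dl > HYPER_THRESHOLD_MG_DL:
            return f"High glucose: {glucose_mg_dl} mg/dL"
        return None  # reading is in range, no alert needed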

The analysis consists of two parts:

  • The first part is an analysis of task descriptions. Based on their contents, we try to group them into several types, e.g., we classify a task described as ‘model hyperparameters optimization’ into the data science category (a keyword-based sketch follows this list).
  • The second part focuses on task times, i.e., differences in work patterns across days of the week or months of the year – for example, ‘which day of the week do we finish work the earliest?’
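A minimal sketch of how the first part could work is shown below. The categories and keyword lists are illustrative assumptions, not the exact rules used in the analysis.

    # Illustrative keyword-based grouping of task descriptions into categories.
    CATEGORY_KEYWORDS = {
        "data science": ["model", "hyperparameter", "dataset", "training"],
        "development": ["api", "frontend", "backend", "refactor", "bugfix"],
        "management": ["meeting", "planning", "review", "report"],
    }

    def classify(description: str) -> str:
        """Assign a task description to the first category whose keyword matches."""
        text = description.lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                return category
        return "other"

    print(classify("model hyperparameters optimization"))  # -> data science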

There were several things we wanted to explore and confirm:

  • Will the company’s shift from development toward data science be visible?
  • Do the projects require more developers than data scientists early on, then switch as the project progresses?
  • Are the overall proportions of development/management stable?
  • What are the typical working hours?
  • What is the most usual time for a lunch break?
  • Do we finish working earlier on Fridays? (see the pandas sketch after this list)
  • Which month of the year do we work the most?
  • How has the total amount of work done increased with the company’s growth?
  • What are the most popular dates for vacation?

The report is based on data exported from the time-tracking system used by DLabs.AI and covers the period from June 2015 to May 2019 – the full four years. There were approximately 65,000 records describing individual tasks.

In this article, we present a new algorithm for creating Vietoris-Rips simplicial complexes that is easily parallelizable using computational models like MapReduce and Apache Spark. The algorithm does not involve any computation in homology spaces.

Keywords: data analysis, topology, simplicial complex

The article of de Silva and Ghrist [2] showed that algebraic topology can be a very practical tool. Its authors analyze the coverage of an area by a network of sensors. This network is interpreted as a simplicial complex, whose homology type is then computed and exploited: if the complex is homotopy equivalent to a point, the coverage is complete. In this way, the authors translated a technical problem into a mathematical one.

In the article [3], Ghrist searches for topological structures in big data sets. Again, this results in building simplicial complexes and analyzing them.

The above-mentioned works inspired us to ask how to build simplicial complexes efficiently and how to analyze them. In this article, we propose a novel algorithm for building the simplicial complex. This algorithm has the unique property that it can be implemented within massively parallel computational models like MapReduce and Apache Spark.
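Before describing the algorithm, it helps to recall the construction it targets. The sketch below is the standard, naive Vietoris-Rips construction up to dimension 2 (edges join points at distance at most ε, and higher simplices are the cliques of the resulting graph); it is not the algorithm proposed in this article, only a reference point. A parallel version would typically distribute the pairwise distance checks as map and filter steps.

    import numpy as np
    from itertools import combinations

    def vietoris_rips(points: np.ndarray, eps: float, max_dim: int = 2):
        """Naive Vietoris-Rips construction: a simplex is included whenever
        all of its vertices are pairwise within distance eps."""
        n = len(points)
        # 1-skeleton: all pairs within distance eps (a map/filter over pairs).
        edges = {(i, j) for i, j in combinations(range(n), 2)
                 if np.linalg.norm(points[i] - points[j]) <= eps}

        simplices = [[(i,) for i in range(n)], sorted(edges)]
        # Higher-dimensional simplices are cliques: every pair must be an edge.
        for dim in range(2, max_dim + 1):
            simplices.append([
                sigma for sigma in combinations(range(n), dim + 1)
                if all(pair in edges for pair in combinations(sigma, 2))
            ])
        return simplices

    # Example: four points at the corners of the unit square.
    cloud = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    print(vietoris_rips(cloud, eps=1.5))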

Hereafter we assume that the given data set is a finite set of points P := {x₁, . . . , xₙ} ⊂ ℝ^d. These data may come from physical experiments, psychological tests, etc., and are generally high-dimensional. We have no insight into how the data actually look or how they are embedded, but knowing their “shape” would be enough to say something about how they are arranged in the ambient space.
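With this notation, the Vietoris-Rips complex sketched above can be written as VR_ε(P) := { σ ⊆ P : ‖x − y‖ ≤ ε for all x, y ∈ σ }, i.e., a subset of P spans a simplex exactly when its points are pairwise within distance ε. The symbol ε is our choice of notation for the scale parameter, which the excerpt does not name.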