
Capstone project: Bellabeat Case Study

My first Data Analytics project. The first one just needs to exist; do not seek perfection!

The Assignment (Challenge):

My Why:

To go through the whole cycle of a data analytics project, which will allow me to:

  • Apply, consolidate, and fill the gaps in the knowledge I gained while studying the Google Data Analytics Professional Certificate | Coursera
  • Create a first case study for my portfolio and publish it on my website: www.lukasdusek.com
  • Complete a first run of, and further refine, the Data Science Project template I created in Notion while studying the course
  • Have fun

"To be a data analyst is not only to be a scientist, but also to be an artist. The entire world is your canvas." ~ Rishie

1st Phase: ASK

Goals of the ASK phase:

It’s impossible to solve a problem if you don’t know what it is. These are some things to consider:

  • Define the problem you’re trying to solve
  • Make sure you fully understand the stakeholder’s expectations
  • Focus on the actual problem and avoid any distractions
  • Collaborate with stakeholders and keep an open line of communication
  • Take a step back and see the whole situation in context

Defining the project:

What is the primary question?

  • How can we unlock new growth opportunities for the company? By analyzing smart device data to gain insight into how consumers use their smart devices, particularly one of Bellabeat’s products. These insights will help guide the company’s marketing strategy. Three guiding questions:
    1. What are some trends in smart device usage?
    2. How could these trends apply to Bellabeat customers?
    3. How could these trends help influence Bellabeat’s marketing strategy?

Who are the primary and secondary stakeholders?

  • Primary:
    • Urška Sršen: Bellabeat’s co-founder and Chief Creative Officer
    • Sando Mur: Mathematician and Bellabeat’s co-founder; a key member of the Bellabeat executive team
  • Secondary:
    • Bellabeat marketing analytics team

What type of problem am I solving?

  • The objective is to identify themes and patterns in users' behaviour, using them as a basis to provide recommendations for marketing campaigns, products, and overall business strategies.
    • Identifying themes
    • Finding patterns
    • Making Predictions

How will I measure the success of the project?

Case Study:

  • At least one prediction or recommendation is derived from the analysis, which can be promptly implemented into the marketing strategy, followed by a test run of the campaign on a smaller scale.

Personal:

  • The project is finished, published, and posted on LinkedIn.

What have I learned during this phase:

When I first started the assignment, I felt overwhelmed. However, as soon as I began following the Project Template I had created, the task broke down into smaller, approachable segments and became much more manageable. Step by step.

Just as the saying goes:

"How do you eat an elephant? One bite at a time!"

I am grateful for the time and effort I invested in creating this template; tackling the assignment would have been significantly more challenging without it.

2nd Phase: PREPARE

Goals of the PREPARE phase:

To decide what data needs to be collected to answer the project questions, and how to organize it so that it is useful.

  • What metrics to measure
  • Locating data in the database
  • Creating security measures to protect that data

Questions to ask yourself in this step:

1. What do I need to figure out how to solve this problem?

2. What research do I need to do?

Collecting and Organising data:

Data collection and organization were minimal in this case, given that a dataset was provided with the case study assignment.

About dataset Fitbit Fitness Tracker Data:

This dataset was generated by respondents to a distributed survey via Amazon Mechanical Turk between 03.12.2016 and 05.12.2016. Thirty eligible Fitbit users consented to the submission of personal tracker data, including minute-level output for physical activity, heart rate, and sleep monitoring. Individual reports can be parsed by export session ID (column A) or timestamp (column B). Variation between output represents the use of different types of Fitbit trackers and individual tracking behaviours/preferences.
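To get a first feel for minute- or day-level exports like these, it helps to tally entries per user Id. Below is a minimal sketch using only the standard library; the column names mimic the Kaggle files (e.g. dailyActivity_merged.csv), but the rows themselves are invented for illustration:

```python
import csv
import io
from collections import defaultdict

# Mock rows shaped like the dailyActivity table; the values are invented.
raw = io.StringIO(
    "Id,ActivityDate,TotalSteps,Calories\n"
    "1503960366,4/12/2016,13162,1985\n"
    "1503960366,4/13/2016,10735,1797\n"
    "1624580081,4/12/2016,8163,1432\n"
)

# One tally of daily entries per tracker Id.
entries_per_user = defaultdict(int)
for row in csv.DictReader(raw):
    entries_per_user[row["Id"]] += 1

print(dict(entries_per_user))
```

The same tally, run over the real files, is what later feeds the per-user tracking-consistency table in the ANALYZE phase.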

Securing data:

I skipped this step as the dataset is public.

Inspecting data:

The main focus of this phase for this concrete project was to inspect the provided data. This gave me an overall idea of what kind of data I'm working with.

My inspection consisted of:

  • Content of datasets
  • Source
  • Data format
  • Data structure
  • Different types of Bias
  • ROCCC
  • Data ethics and privacy & Open Data
  • Anonymization
  • Tool Used

The results of the inspection are summarized below:

  • Data (all quantitative):
    • dailyActivity - time, intensity, distance, calories; no data about the quality or type of activity
    • dailyCalories - number of calories
    • dailyIntensities - intensities, distance
    • dailySteps - total steps
    • sleepDay - total sleep time, total time in bed, number of sleep sessions per day
    • weightLogInfo - weight (kg/pounds), fat (only two entries - unusable), BMI; lacking water, muscle mass, and bone mass
    • hourlyCalories - number of calories
    • hourlyIntensities - total and average intensity
    • hourlySteps - number of steps
    • minuteCaloriesNarrow/Wide - number of calories
    • minuteIntensitiesNarrow/Wide - intensity per minute
    • minuteMETsNarrow - metabolic equivalent of task (one MET = the energy used while sitting quietly), used to indicate the intensity of an activity
    • minuteSleep - sleep in minutes; we lack the "depth" of sleep and whether it was interrupted
    • minuteStepsNarrow/Wide - steps per minute
    • heartrate_seconds - heart rate
  • Source: FitBit Fitness Tracker Data | Kaggle, CC0: Public Domain
  • Storage: personal Google Drive
  • Data format: secondary data; continuous + discrete; quantitative; nominal
  • Data structure: structured data (.csv/.xlsx)
  • Sampling bias? Yes: the source does not provide detailed information about how the sample was created, so I will work with the assumption that the dataset is sampling-biased
  • Observer bias? No: the data is gathered by trackers, so there is no observer bias
  • Interpretation bias? No: the data is gathered by trackers, so there is no interpretation bias
  • Confirmation bias? No: the data is gathered by trackers, so there is no confirmation bias
  • Bias-free? No: for lack of detailed information about sampling, we work with the assumption that sampling bias is present
  • ROCCC? Reliable = N/A (missing detailed information about how the sample was created); Original = Yes; Comprehensive = N/A (enough for a first portfolio case study; in the real world I would look for additional data sources); Current = No (03.12.2016-05.12.2016); Cited = No
  • Ethics & privacy: ownership - publicly shared, anonymized; openness - yes; transaction transparency, consent, currency, and privacy - not stated
  • Anonymization: Yes
  • Collection tool: Amazon Mechanical Turk, 03.12.2016-05.12.2016

What have I learned during this phase:

An initial inspection of the dataset provides an overview and can indicate early on whether the dataset is suitable for analysis. Given that the dataset is out of date and given the uncertainty surrounding it, in a real-life scenario I would consider seeking an alternative dataset. However, since the primary objective of this case study is its completion, I will proceed to the next step.

3rd Phase: PROCESS

Goals of the PROCESS phase:

A strong analysis depends on the integrity of the data, and clean data is the best data. In this phase, the main focus is to clean up the data to remove any possible errors, inaccuracies, or inconsistencies. At this stage it is also crucial to become completely familiar with the dataset and to identify its potential and limitations.

  • Checklists are most helpful for being thorough.
  • Keep track of the changes with a changelog.

Snippet of the Change-log

3.1. Data integrity & alignment:

Each data constraint, with its state before (initial) and after (final) cleaning:

  • Data type/format - Initial: ⛔ (some of the tables were not in the same format as the rest); Final: ✅. Values must be of a certain type: date, number, percentage, Boolean, etc. Example: if the data type is a date, a single number like 30 would fail the constraint and be invalid.
  • Data range - Initial: ✅; Final: ✅. Values must fall between predefined maximum and minimum values. Example: if the data range is 10-20, a value of 30 would fail the constraint and be invalid.
  • Mandatory - Initial: ✅; Final: ✅. Values can’t be left blank or empty. Example: if age is mandatory, that value must be filled in.
  • Unique - Initial: ✅; Final: ✅. Values can’t have a duplicate. Example: two people can’t have the same mobile phone number within the same service area.
  • Regular expression (regex) patterns - Initial: ✅; Final: ✅. Values must match a prescribed pattern. Example: a phone number must match ###-###-#### (no other characters allowed).
  • Cross-field validation - Initial: ✅; Final: ✅. Certain conditions for multiple fields must be satisfied. Example: values are percentages, and values from multiple fields must add up to 100%.
  • Primary key - Initial: ⏸️; Final: ⏸️ (databases only). The value must be unique per column. Example: a database table can’t have two rows with the same primary-key value. A primary key is an identifier in a database that references a column in which each value is unique.
  • Set membership - Initial: ⏸️; Final: ⏸️ (databases only). Values for a column must come from a set of discrete values. Example: a value for a column must be set to Yes, No, or Not Applicable.
  • Foreign key - Initial: ⏸️; Final: ⏸️ (databases only). Values for a column must be unique values coming from a column in another table. Example: in a U.S. taxpayer database, the State column must be a valid state or territory, with the set of acceptable values defined in a separate States table.
  • Accuracy - Initial: ✅; Final: ✅. The degree to which the data conforms to the actual entity being measured or described. Example: if values for zip codes are validated by street location, the accuracy of the data goes up.
  • Completeness - Initial: ✅/⛔ (weightLogInfo: only 8 respondents; sleepDay: only 24 respondents; dailyActivity: many entries have 0 values and 1440 sedentary minutes, which means either that the user sat the whole day 😃 or that the data was not measured); Final: ✅/⛔ (the weight data will not be used). The degree to which the data contains all desired components or measures. Example: if data for personal profiles requires hair and eye color, and both are collected, the data is complete.
  • Consistency - Initial: ✅; Final: ✅. The degree to which the data is repeatable from different points of entry or collection. Example: if a customer has the same address in the sales and repair databases, the data is consistent.

Alignment questions:

  • Are the data aligned with the objective? ✅ Yes, we can draw some conclusions.
  • Are there other valuable variables? ⛔ For this sample, no.
  • Are there missing variables? ✅/⛔ Enough for this case study, but a more thorough analysis would benefit from additional variables.
  • Are there alternative variables? ⛔ For this sample, no.
  • Can/should the objective be expanded/modified based on the current data? ✅ The objective is very broad for the available data.
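The constraints above can be turned into small executable checks. Here is a minimal sketch in Python; the field names and rules are illustrative, not taken from the actual tables:

```python
import re

def check_record(record):
    """Return a list of violated constraints for one record."""
    violations = []
    # Data type/format: steps must be an integer
    if not isinstance(record.get("steps"), int):
        violations.append("type")
    # Data range: heart rate must fall between plausible bounds
    if not (30 <= record.get("heart_rate", 0) <= 220):
        violations.append("range")
    # Mandatory: the user id can't be blank
    if not record.get("id"):
        violations.append("mandatory")
    # Regex pattern: dates must look like M/D/YYYY
    if not re.fullmatch(r"\d{1,2}/\d{1,2}/\d{4}", record.get("date", "")):
        violations.append("regex")
    return violations

good = {"id": "150396", "steps": 13162, "heart_rate": 72, "date": "4/12/2016"}
bad = {"id": "", "steps": "13162", "heart_rate": 250, "date": "2016-04-12"}
print(check_record(good), check_record(bad))
```

Running every record through a checker like this is the programmatic equivalent of ticking off the constraint table row by row.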

3.2. Data Insufficiency & Errors Decision tree

Insufficiencies

  • Comes from only one source = Yes, it is from one source
  • Continuously updates and is incomplete = No, data is static
  • Is outdated = Yes, data is from 2016 (People’s behaviour probably changed during the pandemic)
  • Is geographically limited = Yes/No - we do not have information about the regions the sample is drawn from

To deal with insufficient data, we can:

  • Identify trends within the available data
  • Wait for more data if time allows
  • Discuss with stakeholders and adjust your objective
  • Search for a new dataset
DATA ERRORS decision tree (at each question: if yes, take the action; if no, move down 👇🏻):

  1. Can you fix the data or request a corrected dataset? If yes: perform the analysis after the data has been corrected.
  2. Do you have enough data to omit the wrong data? If yes: perform the analysis without the wrong data. (If no: NOT ENOUGH DATA.)
  3. Can you proxy the data? If yes: perform the analysis with the proxied data.
  4. Can you collect more data? If yes: perform the analysis after data collection. If no: modify the business objective if possible.
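The same branching logic can be sketched as a tiny function (a paraphrase of the decision tree, not production code):

```python
def handle_data_errors(can_fix, enough_without, can_proxy, can_collect_more):
    """Walk the data-errors decision tree and return the action to take."""
    if can_fix:
        return "analyze corrected data"
    if enough_without:
        return "analyze without the wrong data"
    if can_proxy:
        return "analyze with proxied data"
    if can_collect_more:
        return "analyze after collecting more data"
    return "modify the business objective"

# For this case study: the source can't be corrected, but there is
# enough data left to simply omit the broken entries.
decision = handle_data_errors(False, True, False, False)
print(decision)
```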

3.3. Calculating Sample Size

‼️ As we do not have information about the location the sample was drawn from, for this exercise I will assume it represents women in the European Union aged 15-64. That population size is 142,572,761 (source: https://data.worldbank.org/indicator/SP.POP.1564.FE.IN?locations=EU). For this population we would need a sample of about 385 respondents to achieve a 95% confidence level with a 5% margin of error, and about 271 for a 90% confidence level with a 5% margin of error. To calculate the margins of error below, I chose a confidence level of 90%.

Actual sample sizes and resulting margins of error:

  • All_Other: sample size 33, confidence level 90%, margin of error 14.37%
  • sleepDay: sample size 24, confidence level 90%, margin of error 16.85%
  • weightLogInfo: sample size 8, confidence level 90%, margin of error 29.17%
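These figures follow from the standard large-population formulas, assuming maximum variability (p = 0.5); the exact decimals differ slightly depending on how the z-value is rounded. A sketch:

```python
import math

def margin_of_error(n, z=1.645, p=0.5):
    """Margin of error for a proportion: z * sqrt(p(1-p)/n).

    z = 1.645 corresponds to a 90% confidence level; p = 0.5 assumes
    maximum variability. The finite-population correction is negligible
    for a population of ~142 million.
    """
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(moe, z=1.645, p=0.5):
    """Sample size needed to reach a target margin of error (large population)."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

for n in (33, 24, 8):
    print(f"n={n}: margin of error ~ {margin_of_error(n):.2%}")
```

With these formulas, a 5% margin of error requires 385 respondents at 95% confidence (z = 1.96) and 271 at 90%, far more than the 8-33 users available, which is why the margins of error above are so wide.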

3.4. Cleanup Checklist: Check for & Clean Dirty Data

Steps/Type
Status
Description
Possible Causes
Potential harm to businesses
Back up your data prior to data cleaning
Done

It is always good to be proactive and create your data backup before you start your data clean-up. If your program crashes, or if your changes cause a problem in your dataset, you can always go back to the saved version and restore it. The simple procedure of backing up your data can save you hours of work-- and most importantly, a headache.

Document errors
Done

Documenting your errors can be a big time saver, as it helps you avoid those errors in the future by showing you how you resolved them. For example, you might find an error in a formula in your spreadsheet. You discover that some of the dates in one of your columns haven’t been formatted correctly. If you make a note of this fix, you can reference it the next time your formula is broken, and get a head start on troubleshooting. Documenting your errors also helps you keep track of changes in your work, so that you can backtrack if a fix didn’t work.

Keep track of business objectives
Done

When you are cleaning data, you might make new and interesting discoveries about your dataset-- but you don’t want those discoveries to distract you from the task at hand. For example, if you were working with weather data to find the average number of rainy days in your city, you might notice some interesting patterns about snowfall, too. That is really interesting, but it isn’t related to the question you are trying to answer right now. Being curious is great! But try not to let it distract you from the task at hand.

Account for data cleaning in your deadlines/process
Done

All good things take time, and that includes data cleaning. It is important to keep that in mind when going through your process and looking at your deadlines. When you set aside time for data cleaning, it helps you get a more accurate estimate for ETAs for stakeholders, and can help you know when to request an adjusted ETA.

Analyze the system prior to data cleaning
Done

If we want to clean our data and avoid future errors, we need to understand the root cause of the dirty data. Imagine you are an auto mechanic. You would find the cause of the problem before you started fixing the car, right? The same goes for data. First, figure out where the errors come from. Maybe it is a data entry error, a missing spell check, a lack of formats, or duplicates. Then, once you understand where the bad data comes from, you can control it and keep your data clean.

Fix the Source of the error
Done

Fixing the error itself is important. But if that error is actually part of a bigger problem, you need to find the source of the issue. Otherwise, you will have to keep fixing that same error over and over again. For example, imagine you have a team spreadsheet that tracks everyone’s progress. The table keeps breaking because different people are entering different values. You can keep fixing all of these problems one by one, or you can set up your table to streamline data entry so everyone is on the same page. Addressing the source of the errors in your data will save you a lot of time in the long run.

Check the Size of the data set
Done

Check the Number of categories or labels
Done

Check for the different data types
Done

Look for all of the Relevant data
Done

It is important to think about all of the relevant data when you are cleaning. This helps make sure you understand the whole story the data is telling, and that you are paying attention to all possible errors. For example, if you are working with data about bird migration patterns from different sources, but you only clean one source, you might not realize that some of the data is being repeated. This will cause problems in your analysis later on. If you want to avoid common errors like duplicates, each field of your data requires equal attention.

Check for Incomplete data
Done

Any data that is missing important fields

Improper data collection or incorrect data entry

Decreased productivity, inaccurate insights, or inability to complete essential services

Check for Missing values
Done

Missing values in your dataset can create errors and give you inaccurate conclusions. For example, if you were trying to get the total number of sales from the last three months, but a week of transactions were missing, your calculations would be inaccurate. As a best practice, try to keep your data as clean as possible by maintaining completeness and consistency.

Check for Duplicate data
Done

Any data record that shows up more than once

Manual data entry, batch data imports, or data migration

Skewed metrics or analyses, inflated or inaccurate counts or predictions, or confusion during data retrieval

Check for Outdated data
Done

Any data that is old which should be replaced with newer and more accurate information

People changing roles or companies, or software and systems becoming obsolete

Inaccurate insights, decision-making, and analytics

Check for Inconsistent data
Done

Any data that uses different formats to represent the same thing

Data stored incorrectly or errors inserted during data transfer

Contradictory data points leading to confusion or inability to classify or segment customers

Check for Incorrect/inaccurate data
Done

Any data that is complete but inaccurate

Human error inserted during data input, fake information, or mock data

Inaccurate insights or decision-making based on bad information resulting in revenue loss

Check for Spelling errors
Done

Misspellings can be as simple as typing or input errors. Most of the time the wrong spelling or common grammatical errors can be detected, but it gets harder with things like names or addresses. For example, if you are working with a spreadsheet table of customer data, you might come across a customer named “John” whose name has been input incorrectly as “Jon” in some places. The spreadsheet’s spellcheck probably won’t flag this, so if you don’t double-check for spelling errors and catch this, your analysis will have mistakes in it.

Check for Misfielded values
Done

A misfielded value happens when the values are entered into the wrong field. These values might still be formatted correctly, which makes them harder to catch if you aren’t careful. For example, you might have a dataset with columns for cities and countries. These are the same type of data, so they are easy to mix up. But if you were trying to find all of the instances of Spain in the country column, and Spain had mistakenly been entered into the city column, you would miss key data points. Making sure your data has been entered correctly is key to accurate, complete analysis. 
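Several of the checks in this list (duplicates, missing values, inconsistent formats) are easy to automate. A minimal sketch on invented rows shaped loosely like the tracker tables:

```python
import re

# Invented example rows; id/date/steps only mimic the tracker tables' shape.
rows = [
    {"id": "150396", "date": "4/12/2016", "steps": 13162},
    {"id": "150396", "date": "4/12/2016", "steps": 13162},  # duplicate
    {"id": "162458", "date": "2016-04-13", "steps": 8163},  # inconsistent date format
    {"id": "193797", "date": "4/14/2016", "steps": None},   # missing value
]

# Duplicate check: the same (id, date) pair appearing more than once.
seen, duplicates = set(), []
for r in rows:
    key = (r["id"], r["date"])
    if key in seen:
        duplicates.append(key)
    seen.add(key)

# Missing-value check: any field left as None.
missing = [r for r in rows if any(v is None for v in r.values())]

# Inconsistent-format check: dates are expected to look like M/D/YYYY.
inconsistent = [r for r in rows
                if not re.fullmatch(r"\d{1,2}/\d{1,2}/\d{4}", r["date"])]

print(duplicates, len(missing), len(inconsistent))
```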

3.5. Verification Checklist: Comparing the original unclean data set with the clean one.

Measure twice, cut once.
Verification Checklist
Status
Sources of errors: Did you use the right tools and functions to find the source of the errors in your dataset?
✅
Null data: Did you search for NULLs using conditional formatting and filters?
✅
Misspelled words: Did you locate all misspellings?
✅
Mistyped numbers: Did you double-check that your numeric data has been entered correctly?
✅
Extra spaces and characters: Did you remove any extra spaces or characters?
✅
Duplicates: Did you remove duplicates?
✅
Mismatched data types: Did you check that numeric, date, and string data are typecast correctly?
✅
Messy (inconsistent) strings: Did you make sure that all of your strings are consistent and meaningful?
✅
Messy (inconsistent) date formats: Did you format the dates consistently throughout your dataset?
✅
Misleading variable labels (columns): Did you name your columns meaningfully?
✅
Truncated data: Did you check for truncated or missing data that needs correction?
✅
Business Logic: Did you check that the data makes sense given your knowledge of the business?
✅

3.6. Review the goal of the project

Once you have finished these data-cleaning tasks, it is a good idea to review the goal of your project and confirm that your data is still aligned with that goal. This is a continuous process that you will do throughout your project-- but here are three steps you can keep in mind while thinking about this:

  1. Confirm the business problem
  2. Confirm the goal of the project
  3. Verify that the data can solve the problem and is aligned with the goal

What have I learned during this phase:

The biggest finding from examining and cleaning the dataset is that a large share of entries is missing: many people do not track their daily activities consistently. I'll explore this further in the analysis and the conclusion that follows.

4th & 5th Phase: ANALYZE & SHARE

Goals of the ANALYZE & SHARE phase:

The focus is on thinking analytically about the data. At this stage, we might sort and format data to make it easier to:

  • Perform calculations
  • Combine data from multiple sources
  • Create tables with the results

Questions to ask yourself in this step:

  1. What story is my data telling me?
  2. How will my data help me solve this problem?
  3. Who needs my company’s product or service? What type of person is most likely to use it?

Everyone shares their results differently, so we need to be sure to summarize our results with clear and enticing visuals of our analysis, using tools like graphs or dashboards. It is our chance to show the stakeholders that we have solved their problem and how we got there. Sharing will help the team:

  • Make better decisions
  • Make more informed decisions
  • Lead to better outcomes
  • Successfully communicate our findings

Questions to ask yourself in this step:

  1. How can I make 'what I present' to the stakeholders engaging and easy to understand?
  2. What would help me understand this if I were the listener?

I have decided to share three findings from my analysis: Tracking across the users; Relationships between tracker variables; and Hourly average intensity for the sample.

Tracking across the users

Here, I looked more closely into the findings from the previous step, which revealed a significant absence of entries. The discrepancies in tracking daily activities are significant: only 25% of respondents tracked consistently, and 46.9% were classified as 'High trackers', tracking 21-30 days out of 31. When examining sleep tracking, the disparities are even more pronounced: merely 9.4% of respondents tracked daily, only 28.1% qualified as 'High trackers', and notably, 28.1% of respondents did not track their sleep at all.

Three questions that came to my mind:

  • How to help the users to be more consistent?
  • How to make tracking as simple and as frictionless as possible?
  • Why do people track their sleep much less than their daily activities?
| Category | Days Tracked (users) | Days Sleep Tracking (users) |
| --- | --- | --- |
| Not Tracked (=0) | 0 | 9 |
| Low Tracker (>=1, <=10) | 0 | 8 |
| Moderate Tracker (>=11, <=20) | 8 | 3 |
| High Tracker (>=21, <=30) | 15 | 9 |
| All-time Tracker (=31) | 9 | 3 |
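The bucketing behind these counts can be sketched as a small function; the day counts fed into it are taken from the 'Entries for Month' column of the per-user table below:

```python
from collections import Counter

def tracker_category(days):
    """Bucket a user's tracked days (out of 31) into the tracking categories."""
    if days == 0:
        return "Not Tracked"
    if days <= 10:
        return "Low Tracker"
    if days <= 20:
        return "Moderate Tracker"
    if days <= 30:
        return "High Tracker"
    return "All-time Tracker"

# Days of activity tracking per user, from the 'Entries for Month' column.
days_tracked = [31, 31, 29, 16, 30, 17, 31, 31, 30, 30, 31, 27, 30, 23, 25, 15,
                19, 31, 24, 30, 31, 31, 30, 18, 24, 31, 17, 28, 28, 20, 29, 18]

counts = Counter(tracker_category(d) for d in days_tracked)
print(counts)
```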
| User (Id) | Entries for Month | Entries % Month | Asleep Entries Month | Asleep % Month |
| --- | --- | --- | --- | --- |
| 2026352035 | 31 | 100.00% | 28 | 100.00% |
| 4558609924 | 31 | 100.00% | 5 | 100.00% |
| 7086361926 | 29 | 100.00% | 24 | 100.00% |
| 6775888955 | 16 | 100.00% | 0 | 90.32% |
| 1644430081 | 30 | 100.00% | 0 | 90.32% |
| 2347167796 | 17 | 93.55% | 14 | 90.32% |
| 2022484408 | 31 | 96.77% | 0 | 87.10% |
| 2320127002 | 31 | 90.32% | 1 | 83.87% |
| 4388161847 | 30 | 96.77% | 23 | 80.65% |
| 4702921684 | 30 | 90.32% | 27 | 80.65% |
| 6962181067 | 31 | 93.55% | 31 | 77.42% |
| 8877689391 | 27 | 96.77% | 0 | 74.19% |
| 8053475328 | 30 | 74.19% | 3 | 61.29% |
| 6117666160 | 23 | 61.29% | 19 | 48.39% |
| 8583815059 | 25 | 54.84% | 0 | 45.16% |
| 4020332650 | 15 | 48.39% | 8 | 25.81% |
| 8792009665 | 19 | 100.00% | 15 | 16.13% |
| 2873212765 | 31 | 54.84% | 0 | 16.13% |
| 7007744171 | 24 | 96.77% | 2 | 12.90% |
| 1503960366 | 30 | 96.77% | 25 | 9.68% |
| 8378563200 | 31 | 58.06% | 31 | 9.68% |
| 5553957443 | 31 | 77.42% | 31 | 6.45% |
| 1624580081 | 30 | 100.00% | 4 | 3.23% |
| 8253242879 | 18 | 100.00% | 0 | 0.00% |
| 6290855005 | 24 | 100.00% | 0 | 0.00% |
| 4445114986 | 31 | 96.77% | 28 | 0.00% |
| 1927972279 | 17 | 87.10% | 5 | 0.00% |
| 5577150313 | 28 | 80.65% | 26 | 0.00% |
| 4319703577 | 28 | 77.42% | 25 | 0.00% |
| 3372868164 | 20 | 64.52% | 0 | 0.00% |
| 3977333714 | 29 | 58.06% | 28 | 0.00% |
| 1844505072 | 18 | 51.61% | 3 | 0.00% |
| Mean | 26.125 | 84.27% | 12.6875 | 40.93% |
| Median | 29 | 93.55% | 6.5 | 20.97% |

Relationship analysis

Next, I was curious whether any metrics showed a relationship with sleep. Below we look at two examples.

Source Data Table for Relationship Analysis

1. Relationship between sedentary minutes and minutes asleep.

The analysis revealed a negative correlation between time spent sitting and time spent asleep. Inconsistencies in tracking may influence the accuracy of the results, and to draw a more definitive conclusion, analysing a better sample would be necessary. Nonetheless, this finding remains interesting.

[Graph: relationship between sedentary minutes and minutes asleep]

2. Relationship between number of steps and minutes asleep.

On the other hand, I found no relationship between the number of steps and time spent asleep.

[Graph: relationship between number of steps and minutes asleep]
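Relationships like these can be quantified with the Pearson correlation coefficient. A minimal sketch on invented pairs (not the actual dataset values), constructed to mimic the negative trend described above:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented daily observations: sedentary minutes vs. minutes asleep.
sedentary = [700, 750, 800, 900, 1000, 1100, 1200]
asleep = [460, 450, 430, 400, 360, 320, 280]

r = pearson(sedentary, asleep)
print(f"r = {r:.3f}")  # strongly negative for this mock data
```

A value of r near -1 indicates a strong negative linear relationship, near 0 none at all; with a sample this inconsistent, any real-world r should be reported together with its uncertainty.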

Hourly Average Intensity

In the following graph, we observe the level of activity throughout the day. The highest activity levels are typically recorded between 17:00 and 19:00, with two noticeable dips occurring between 10:00-12:00 and 14:00-16:00.

| Hour | Average Intensity |
| --- | --- |
| 00:00 | 0.04 |
| 01:00 | 0.02 |
| 02:00 | 0.02 |
| 03:00 | 0.01 |
| 04:00 | 0.01 |
| 05:00 | 0.08 |
| 06:00 | 0.13 |
| 07:00 | 0.18 |
| 08:00 | 0.24 |
| 09:00 | 0.26 |
| 10:00 | 0.29 |
| 11:00 | 0.28 |
| 12:00 | 0.33 |
| 13:00 | 0.31 |
| 14:00 | 0.31 |
| 15:00 | 0.26 |
| 16:00 | 0.30 |
| 17:00 | 0.36 |
| 18:00 | 0.37 |
| 19:00 | 0.36 |
| 20:00 | 0.24 |
| 21:00 | 0.20 |
| 22:00 | 0.15 |
| 23:00 | 0.08 |
[Graph: hourly average intensity]
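The peak and dip hours can also be read off programmatically; this sketch reuses the intensity values from the table:

```python
# Average intensity per hour, copied from the table above.
intensity = {
    "00:00": 0.04, "01:00": 0.02, "02:00": 0.02, "03:00": 0.01,
    "04:00": 0.01, "05:00": 0.08, "06:00": 0.13, "07:00": 0.18,
    "08:00": 0.24, "09:00": 0.26, "10:00": 0.29, "11:00": 0.28,
    "12:00": 0.33, "13:00": 0.31, "14:00": 0.31, "15:00": 0.26,
    "16:00": 0.30, "17:00": 0.36, "18:00": 0.37, "19:00": 0.36,
    "20:00": 0.24, "21:00": 0.20, "22:00": 0.15, "23:00": 0.08,
}

# Single most active hour, and the three most active hours overall.
peak_hour = max(intensity, key=intensity.get)
top_window = sorted(intensity, key=intensity.get, reverse=True)[:3]
print(peak_hour, top_window)
```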

6th Phase: ACT

Now it's time to act on the data: take everything learned from the analysis and put it to use. This could mean providing stakeholders with recommendations based on the findings so they can make data-driven decisions.

Questions to ask yourself in this step:

  1. How can I use the feedback I received during the share phase (step 5) to actually meet the stakeholder's needs and expectations?


The problem


The Solution


Conclusion


Credits



© 2023 Lukáš Dušek. Built with Notion.so & Super.so
