alt/cohort-1-build-better-tracking-setups · Posted by u/Timo Dechau · 3 days ago

Day 9 - Define the Properties for ChatGPT

If you have already watched the Day 4 lesson, you will know that I have a close relationship with properties. Reason #1 -> they can give you the hints in your data that move the needle in your setup. A specific customer attribute that performs better, a campaign that brings in new users who beat every other cohort. Let's have a look at some properties for ChatGPT that can help surface these segments. We define the properties based on the entities we have defined:

Account
- account_id: we love ids to later enrich the dataset
- account_type: free/paid. In a freemium model this is an existential property to segment two very different account types, and to understand how well the free model works as a channel to paid conversion
- account_range_lifetime_chats: how many chats this account has had in its lifetime. This is a range property to make the analysis straightforward, but we would also track account_num_lifetime_chats
- account_days_since_last_chat: helpful to distinguish power users from occasional users (could also use a ranged property)

Chat
- chat_id
- chat_range_tokens: to understand the size of the chat; could also be broken down into input and output tokens
- chat_days_active: to understand how many chats last for days or weeks

Again, the fun part about properties is that they are quite flexible and often tied to specific analysis use cases. Therefore they can change over time.
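To make this concrete, here is a minimal sketch of how these properties could be typed in a tracking plan. The interface names and the range buckets are my own assumptions for illustration, not something defined in the lesson.

```typescript
// Hypothetical property schemas for the ChatGPT tracking plan.
// Names and range buckets are illustrative assumptions, not an official spec.

type AccountType = "free" | "paid";

// Ranged buckets keep the analysis straightforward; exact counts are tracked alongside them.
type LifetimeChatRange = "0" | "1-10" | "11-50" | "51-200" | "200+";
type TokenRange = "0-1k" | "1k-10k" | "10k-50k" | "50k+";

interface AccountProperties {
  account_id: string;                   // id for later enrichment of the dataset
  account_type: AccountType;            // free vs. paid: the core freemium segment
  account_range_lifetime_chats: LifetimeChatRange;
  account_num_lifetime_chats: number;   // exact count, tracked alongside the range
  account_days_since_last_chat: number; // power users vs. occasional users
}

interface ChatProperties {
  chat_id: string;
  chat_range_tokens: TokenRange;        // size of the chat; could split into input/output tokens
  chat_days_active: number;             // how many chats last for days or weeks
}

// Example payload attached to a chat-level event
const exampleChat: ChatProperties = {
  chat_id: "chat_123",
  chat_range_tokens: "1k-10k",
  chat_days_active: 3,
};
```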

alt/cohort-1-build-better-tracking-setups · Posted by u/Timo Dechau · 9 days ago

Day 8: ChatGPT - Define the activities

After defining the entities for ChatGPT, it is now time to define the activities. Again, remember to think about a lifetime journey with core milestones. Here are the entities I would use and the activities I would start with:

Account
- created: the start of the customer journey. An important event at the top of the lifetime events
- deleted: not a significant event unless you delete accounts after a period of inactivity. If you don't, better use inactivated or set_inactive

Subscription
- created
- renewed
- contracted
- expanded
- cancelled
- churned

Just the classic MRR bridge/waterfall events.

Chat
- started
- finished: if we don't track messages, we need an event that determines when a chat is finished. This is a classic session calculation case. You could start by having the backend send this after 10 minutes of inactivity (see the sketch after this list).
- restarted: an important event to send after the chat has been flagged inactive. It helps you understand whether your finished event makes sense (when you have plenty of restarts that are just minutes away from the finished event, it probably doesn't).

In general, you need to watch this and decide whether it is better to track the messages instead. Tracking messages would be semantically a lot easier, but it creates massive volumes. If you track messages, you just need a message_sent event.

Task
- started
- finished

Tasks are the special tasks that the model performs, an abstraction on top of tool use. This would need a good task categorization, like deep research, web search, image generation, code generation.
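Here is a minimal sketch of the inactivity-based chat_finished / chat_restarted logic described above, assuming a backend job that can look up the last activity timestamp per chat. The function and property names are my own illustrations, not part of the lesson.

```typescript
// Sketch: derive chat_finished and chat_restarted from inactivity.
// Assumes a periodically running backend job and a track() function wired
// to your analytics pipeline; all names here are illustrative assumptions.

const INACTIVITY_LIMIT_MS = 10 * 60 * 1000; // 10 minutes, as in the post

interface ChatState {
  chatId: string;
  lastActivityAt: number;  // epoch ms of the last message in the chat
  flaggedFinished: boolean;
}

function track(event: string, properties: Record<string, unknown>): void {
  // Replace with your analytics client (Segment, RudderStack, in-house, ...)
  console.log(event, properties);
}

// Periodic job: flag chats as finished after 10 minutes of inactivity.
function checkForFinishedChats(chats: ChatState[], now: number): void {
  for (const chat of chats) {
    if (!chat.flaggedFinished && now - chat.lastActivityAt > INACTIVITY_LIMIT_MS) {
      chat.flaggedFinished = true;
      track("chat_finished", { chat_id: chat.chatId });
    }
  }
}

// Called whenever a new message arrives in a chat.
function onChatActivity(chat: ChatState, now: number): void {
  if (chat.flaggedFinished) {
    // Activity after the chat was flagged inactive -> restart.
    chat.flaggedFinished = false;
    track("chat_restarted", { chat_id: chat.chatId });
  }
  chat.lastActivityAt = now;
}
```

If you see plenty of chat_restarted events only a few minutes after chat_finished, that is the signal from the post that the finished definition (or the 10-minute cutoff) needs adjusting.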

alt/cohort-1-build-better-tracking-setups · Posted by u/Timo Dechau · 10 days ago

Day 7: Getting our hands dirty: ChatGPT entities

It's now a good time to practice the skills we have developed over the last seven days. What I want you to do is think about how you would design the entities for ChatGPT. Be warned, this is not a simple example like the ones we had in the videos. ChatGPT is a little bit exotic because it behaves differently from other products, but that is exactly what makes it a super interesting exercise.

The exercise for today:
1. Define the core fundamental entities for ChatGPT. Really think about what the core entity is that is really driving ChatGPT. It might be quite obvious, but also think about the entities surrounding it.
2. Think a little bit more about the current strategic moves OpenAI is making (just look at the latest releases). There is Operator, which does things for you on the Internet; the new Agent Mode, where you can build workflows; and Codex, where you can develop applications from your command line. These are different directions, so feel free to pick one and think about how you could create the entities for it.

I will create my own versions tomorrow morning, so I will let you know what my thoughts are. If comments are not working here, make sure you first join the space and then write your comment. If they're still not working, just send me your answers via email and I will have a look. I will also check how we can get the comments up and working, so that we all have the possibility to participate.

alt/cohort-1-build-better-tracking-setups · Posted by u/Timo Dechau · 22 days ago

Tracking Fail #2: "We Forgot to Add Tracking, But We Have to Go Live Tomorrow"

The most expensive sentence in product analytics.

Picture this: six months of development. Countless sprints. Multiple rounds of QA. The feature is polished, tested, and ready. Tomorrow's the big launch. Then someone asks: "Did we add the event tracking?" Silence. We've all been there.

The Sliding Backlog Syndrome

Here's how it typically unfolds:
Sprint 1: "We'll add tracking in the next sprint"
Sprint 5: "Let's focus on the core feature first"
Sprint 10: "We're too close to launch to risk breaking anything"
Launch day: "Can we add it quickly?"

The answer is either a rushed, error-prone implementation or launching blind. Neither option is good.

Why This Keeps Happening

Event tracking consistently slides down the backlog because it's never seen as "blocking" the feature. The feature works without it, right? But here's what you're actually launching without:
-> No idea if users even find the feature
-> No way to measure if it solves the problem
-> No data to guide iterations
-> No proof of ROI for the six months invested (ok, this is hard even with data)

The Simple Fix

One line in your definition of done: "Tracking implemented and tested." (You don't have a DoD? Well, then we should have a different talk.) Make tracking requirements part of the initial user stories, not an afterthought. When you write "User can upload video comments," include "Track when video comment is started, completed, and viewed." A sketch of what those track calls could look like follows below.
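As an illustration of baking tracking into the user story, here is a minimal sketch of the three events for the video comment example. The event names, the track() helper, and the properties are assumptions for the sake of the example, not a prescribed naming scheme.

```typescript
// Sketch: tracking for "User can upload video comments".
// track() stands in for whatever analytics client you use; the event
// and property names below are illustrative, not a fixed convention.

function track(event: string, properties: Record<string, unknown>): void {
  console.log(event, properties); // replace with your analytics call
}

function onVideoCommentStarted(postId: string): void {
  track("video_comment_started", { post_id: postId });
}

function onVideoCommentCompleted(postId: string, durationSeconds: number): void {
  track("video_comment_completed", { post_id: postId, duration_seconds: durationSeconds });
}

function onVideoCommentViewed(postId: string, commentId: string): void {
  track("video_comment_viewed", { post_id: postId, comment_id: commentId });
}
```

Writing these three calls into the user story up front is the whole point of the fix: they become part of the definition of done instead of an afterthought.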