6 Lessons for PMs from Twitter's algo reveal
+ 5 AI Tools PMs can use to process customer feedback better
Hi BPL fam,
Time flies, doesn’t it? Quarter 1 is over.
PMs around the world were busy this week sending out performance reports & tinkering with Q2 roadmaps.
What a Q1 - AI dominated the headlines…and our heads. I’m sure many of the board calls, presentations & roadmap reviews that happened this week were full of chatter around AI, ChatGPT plugins & LLMs.
Anyhoo — let’s dive into today’s edition of Behind Product Lines. I’ve structured this one into three “lines”:
News Line: Lessons for PMs from Twitter’s recommendation algo
AI Line: 5 AI Tools that’ll help PMs analyze customer feedback better
Product Line: The PM Superpower that saves teams 100s of hours
6 Lessons for Product Managers from Twitter’s algo reveal
Twitter stole the show this week after it shared the source code for a portion of its recommendation algorithm. This was in line with Elon’s stance on transparency and the benefits of open-source culture.
Even if you don’t agree with something, at least you’ll know why it’s there, and that you’re not being secretly manipulated … The analog, here, that we’re aspiring to is the great example of Linux as an open source operating system … One can, in theory, discover many exploits for Linux. In reality, what happens is the community identifies and fixes those exploits.
Experts around the world had a field day with the code. Analyses poured in from all corners & speculation was intense.
You can peruse the details of the algo on Twitter or I could save you some time and give you a little visual to sum up the highlights. In short, the graphic below roughly sums up *some* of the factors Twitter uses to boost or diminish a Tweet’s visibility (massive thanks to Steven Tey & Aakash Gupta for their in-depth analysis):
Great. But that’s Twitter. How does that affect us?
There are a few lessons for PMs working on products that deal with user-generated content (UGC) here e.g. social media platforms, marketplaces, e-commerce platforms etc. Such products have to come up with smart ways to rank-order content, especially on search interfaces where sorting by relevancy is a common use case.
Whether it’s a social network sorting posts on a feed or an e-commerce play highlighting recommendations, Twitter’s algorithm is just one blueprint for how such products can reward or restrict the visibility of UGC.
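To make the “boost or diminish” idea concrete, here’s a toy scoring sketch. The signals, boosts and numbers below are purely illustrative placeholders, not Twitter’s actual factors or weights:

```python
# Toy visibility score for a piece of UGC: a base engagement score
# scaled by boost/diminish multipliers. All signal names and numbers
# here are illustrative placeholders, NOT Twitter's real weights.

def visibility_score(post: dict) -> float:
    base = (
        1.0 * post.get("likes", 0)
        + 2.0 * post.get("replies", 0)
        + 3.0 * post.get("reshares", 0)
    )

    multiplier = 1.0
    if post.get("has_media"):             # richer content gets a boost
        multiplier *= 1.5
    if post.get("author_is_subscriber"):  # paying users get a nudge, not a takeover
        multiplier *= 1.2
    if post.get("flagged_as_spam"):       # harmful behavior gets heavily diminished
        multiplier *= 0.1

    return base * multiplier


print(visibility_score({"likes": 40, "replies": 5, "reshares": 2, "has_media": True}))  # 84.0
```

The exact signals will differ per product; the transferable part is the shape: a base quality score nudged up or down by explicit, documented boosts and penalties.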
A few pointers:
Embrace Transparency: Twitter, along with several mainstream social media platforms, has historically been opaque about its algorithms. That makes manipulation fairly easy. Since the platforms are free, users are often thought to be the “product” itself & such sites can decide the narrative they want to sell.

With explicit ranking policies, you anchor the product with the user’s trust. And trust, in this day and age, is a strong moat to have. It also helps content creators stop guessing & adhere to the decorum the platform wants them to abide by. Yes, this comes with the risk of users figuring out how to “game” the system, but it’s natural for the algo to grow more sophisticated over time.

Thus, transparency is key. Be open with your users about how you sheriff their data. Open a 2-way dialog.

Encourage quality content: Know what really counts as engagement. Twitter weighs someone staying on a tweet for more than 2 minutes MORE than a like. Why? Likes can be fickle & carry a lower investment than time-on-tweet. So identify which actions on your product truly constitute meaningful engagement and are unlikely to be gamed.

For example, when ranking items by relevancy on an online store, a team could decide on weights like: count of times an item is added to a cart > count of times it’s added to favorites > views on the detail page (there’s a rough sketch of this idea right after these pointers).

Reward paying subscribers but strike a balance: It was no secret that Twitter was giving Blue subscribers some extra oomph. But it’s the balance between organic and paid that is a highlight for me.

Ex: a rookie PM mistake in the classifieds space is to let featured sellers flood all the prime real estate on your search listings. So, offer premium features but not at the expense of organic content.

No tolerance for negativity: Track and penalize users who exhibit harmful behavior such as spamming, abuse, or publishing misleading content — even if they are paying. Negativity breeds negativity and ends up denting your word-of-mouth. Moreover, be a responsible platform. Be aware of how letting harmful content persist affects the community. (Ex: a host on Airbnb flagged for inappropriate behavior can face a lifetime ban)

Incentivize desired behavior: Nir Eyal says it best in his book “Hooked”: ensure that the rewards offered align with your product's goals and encourage favorable behaviors that contribute to the overall success of the product.

Twitter rewards posting multimedia content. Find out what behavior creates a virtuous cycle in your product & create publicized incentives around it.

Adaptability and continuous improvement: If you stop evolving your recommendation algo, you’re assuming that your audience doesn’t evolve. Big mistake. But your evolution vector is also important. There were instances in the Twitter code where tweets seemed to be profiled based on whether they were from Elon, a Republican, or a Democrat. That establishes a basis for bias, which is not the way to go.
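And here’s the rough sketch promised in the “Encourage quality content” pointer: a hypothetical weighted relevancy score for store listings, where higher-investment signals count for more. The signal names and weights are made up for illustration:

```python
# Hypothetical relevancy score: higher-investment actions weigh more than passive views.
WEIGHTS = {"add_to_cart": 5.0, "favorite": 3.0, "detail_view": 1.0}

def relevancy(stats: dict) -> float:
    return sum(WEIGHTS[signal] * count for signal, count in stats.items())

items = {
    "running shoes": {"add_to_cart": 12, "favorite": 30, "detail_view": 400},
    "rain jacket":   {"add_to_cart": 25, "favorite": 10, "detail_view": 150},
}

# Sort listings by the weighted score, most relevant first.
for name in sorted(items, key=lambda n: relevancy(items[n]), reverse=True):
    print(name, relevancy(items[name]))
```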
Twitter headquarters also aired some rather eccentric tactics:
While staying close to customers & iterating fast are hailed as PM virtues, this seems to be taking it a bit too far.
Twitter has a global user base. There’s a wide spectrum of users who will pool in their ideas. Each will have their own agenda, preferences & line of thinking. One user’s wish-list item can well be another’s dealbreaker. This also forebodes the creation of a feature factory - a confusing potpourri of everything.
Time will tell how they plan to reconcile this into a coherent product strategy but this approach isn’t found in any PM handbook. IMHO it can go south pretty quickly.
5 AI Tools for PMs to analyze customer feedback better
I thought I’d share a few tools I’ve been checking out:
Better Feedback
Better Feedback looks through all your customer feedback content (across CRMs, helpdesks, email) and uses GPT-4 to let Product Managers ask questions in plain English and get answers back. This would save tons of time reading & consolidating 100s of messages.
e.g. you could ask something like “What do our customers think about the new search experience?” and get back a ChatGPT-like reply.
FreeText AI
FreeText AI is another AI-backed tool that automatically sorts feedback into categories & segments them by sentiment. It provides explanations to help you better understand and internalize the feedback.
Another product that allows this is Dovetail in case you’re interested.
Otter.ai
When conducting a customer discovery interview, writing notes can become a hassle and a distraction. For me, it limits my cross-questioning and dampens my flow. Otter.ai is an AI assistant that sits in your meetings, records the audio, transcribes it, writes live notes, and generates actionable summaries.
Several people in my circle are actively using it and have positive things to say.
SuperNormal
Like Otter, SuperNormal also helps you record meeting notes in one place and then opens that up to your team for collaboration.
But they go beyond just “meeting notes”.
They summarize the highlights of the meeting in bullet points and neatly organize them into segments like “Next steps”, “Pain points” and so on to make them easy to act upon. This can also save a lot of time after customer interviews.
Rewind
The concept behind this one is dreamy & fascinating.
Rewind is an AI tool that records everything you say, hear or read and then allows you to query it to retrieve relevant answers. This would be super useful for PMs. We collect customer evidence from a variety of sources, and being able to query a single interface (rather than sift through hundreds of emails or calls) would save time & level up our effectiveness.
Hope that round-up of AI tools helps. I’m sure many more will come out in the coming months, so keep an eye out.
The Product Manager superpower that saves teams 100s of hours
Every Product Manager is supposed to be a problem-solver. That’s a given.
However, everyone has their own way of tackling problems. Some have a “bias for action” and leap straight into exploring solutions. Others crave processes like Double-Diamond and design sprints to spend more time basking in the problem space.
There’s a fundamental problem-solving technique that can be paired with any process and can literally save 100s of hours.
Problem Reframing.
What does that mean?
Reframing is a problem-solving technique that focuses on viewing a problem from a different (& more conducive) angle, paving the way for more elegant solutions. It allows the team to question the assumptions baked into the problem itself and use first-principles thinking to deconstruct, step back and ask “What is the user really trying to do?”
It’s a hard one to master but extremely rewarding.
The process of reframing involves 4 steps:
Goal pinning: Set the problem statement aside & identify the underlying “job” the user is trying to accomplish.
Spotlighting assumptions: Make the assumptions we are making explicit & question whether they can be challenged, removed or modified.
Angle of attack: Explore other angles of attack outside the existing frame.
Test: See if your new angle of attack still solves (1).
Too theoretical?
Let’s inspect this with 3 short case studies.
Case Study 1: Faster F1 car
Scenario: Gordon Murray was an F1 racing car designer who was tasked with making a faster car on a limited budget.
Original problem frame
How can we make the car go faster?
Proposed solutions: Use a faster engine, improve structural design, reduce drag.
Now, here’s what the reframe process looks like:
Pinned goal: Increase the speed of the car.
Assumptions: To make the car go faster, you need structural re-designs to existing components.
Angle of attack: What if we keep the car the same but reduce the load it has to carry? A lighter car with the same engine would have less weight to move and would thus accelerate better.
Test: Make a lighter car with the same engine = increased speed of the car.
Reframed Problem
How can we make the car lighter?
Solution: Let go of unnecessary equipment. Choose lighter components across the frame of the car.
As you can see, the design mindset completely changes with the reframe. It also aligns with the budgetary constraints.
Case Study 2: Out of Stock Items
Product = Instacart Grocery Delivery App
Scenario: Shoppers place an order for groceries. The payment is captured. However, they are later notified that one or more items are no longer in stock.
Original problem frame
How can we make the app sync better
with actual stocking levels at partner stores?
Proposed solutions: Collaborate with partner stores & deploy native integrations with stocking software or implement predictive stock tracking.
Note: We’re assuming this problem is occasionally observed and is not exceedingly frequent. If the app is always out of sync, that’s a larger issue to tackle.
Now, here’s what the reframe process looks like:
Pinned goal: What will a better sync enable? A happy user that gets to check out & acquire a product without cancellations.
Assumptions: The user wants the same SKU and quantity that they initially requested.
Angle of attack: Would the user be open to getting an equivalent alternate to what they requested? Would they mind getting a different brand or quantity of certain grocery items?
Test: Allow a user to pick a backup if an item is not in stock = User acquires a product they need.
Reframe
How can we enable users to specify
alternates if certain products are out of stock?
Solution: Create a backup-selection flow within the app or, when an item isn’t found, prompt the user with a list of alternatives they can pick with a tap. (easier to implement)
Case Study 3: Faster Video Editing
Product: Descript - video editor.
Scenario: Users spend a LOT of time in video editors to process podcasts. The majority of that time goes into removing filler words like “ums” and “ahs”.
Original problem frame
How can we improve the video editor interface to
help edit out filler words faster?
Proposed solutions: Use intelligence to detect patterns for filler words. Highlight segments in the timeline where these words are found to guide editors in clipping them out.
This can be a cumbersome process. The video specialist still has to verify the highlighted clip has a filler word. They would still need to adjust the timeline markers to clip out the exact part.
Now, here’s what the reframe process looks like:
Pinned goal: A podcast without filler words. (note we didn’t mention the editor interface)
Assumptions: The frame timeline interface is the only way to edit videos.
Angle of attack: Transcription technology has become way better now. Could we transcribe an audio file and then edit the video like a document instead? i.e. find the ums and ahs written in the transcript, delete them & have the system delete the corresponding audio frames?
Test: Transcribe, delete unwanted filler words and reflect the same edits in the audio => a podcast without filler words.
Reframe
How can we transcribe the audio &
make edits like a document?
Solution: Descript accomplished this with their transcription technology:
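For a feel of how “editing video like a document” can work under the hood, here’s a minimal sketch. The transcript format (word-level timestamps) is an assumption for illustration, not Descript’s actual API: deleting a word in the transcript maps to dropping its time range from the audio.

```python
# Minimal sketch of transcript-driven editing: drop filler words from the transcript
# and keep only the audio time ranges of the words that remain.
# The transcript format below is an assumption for illustration, not Descript's API.

FILLERS = {"um", "uh", "ah", "erm"}

def keep_ranges(transcript):
    """Return (start, end) time ranges to keep after removing filler words."""
    ranges = []
    for word in transcript:
        if word["text"].lower().strip(".,!?") in FILLERS:
            continue  # filler word: its time range is simply not kept
        if ranges and abs(ranges[-1][1] - word["start"]) < 1e-6:
            ranges[-1] = (ranges[-1][0], word["end"])  # extend the previous range
        else:
            ranges.append((word["start"], word["end"]))
    return ranges

transcript = [
    {"text": "So",   "start": 0.0, "end": 0.4},
    {"text": "um",   "start": 0.4, "end": 0.7},
    {"text": "this", "start": 0.7, "end": 1.0},
    {"text": "is",   "start": 1.0, "end": 1.2},
    {"text": "uh",   "start": 1.2, "end": 1.5},
    {"text": "the",  "start": 1.5, "end": 1.7},
    {"text": "plan", "start": 1.7, "end": 2.1},
]

print(keep_ranges(transcript))  # [(0.0, 0.4), (0.7, 1.2), (1.5, 2.1)]
```

A rendering step would then keep only those ranges and stitch them back together.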
That’s all for this edition of Behind Product Lines. Till next week, ciao!