How to Keep Up With Generative AI in 2025
5 Actionable Solutions with Practical Resources
DeepSeek-R1, an open-source reasoning generative AI model, was announced on January 20, 2025. R1’s release caused tech stocks to drop and prompted investors to reconsider AI investments.
A few days later, ByteDance, the parent company of TikTok, unveiled UI-TARS, an AI agent capable of reading graphical user interfaces and autonomously taking step-by-step actions.
On February 2, 2025, merely 13 days after the announcement of R1, OpenAI launched Deep Research, an AI agent designed for complex research tasks that can reportedly complete in ‘tens of minutes’ work ‘that would take a human many hours’.
It seems like new AI breakthroughs are popping up faster than I can keep my inbox at zero these days!
So the question is, how can we keep up?
The key isn’t just to consume content reactively and mindlessly, but to have a structured way to stay informed from credible sources, experiment, and find simple ways to apply what we learn. Here are 5 actionable solutions to help you stay ahead of the AI game.
Follow Research Papers & Conferences
Why:
Skip the noise of social media hype and go straight to the source. There are plenty of self-proclaimed AI experts on the internet today. While a handful of these ‘experts’ share insightful content and dive deep into the research to back up their claims, most simply synthesize news articles on the latest AI trend. That’s fine for the majority of people, who are interested enough in AI to stay in the loop but not invested enough to care what’s going on behind the scenes. And if you’re reading this newsletter, I’m guessing you don’t fall into that category.
How:
arXiv - open-access research papers. The most comprehensive OG source for the latest research papers, though its sheer volume can make it difficult to sort through.
Scholar Inbox - personal paper recommender that enables you to stay up-to-date with the most relevant progress in your fields of interest.
Papers with Code - tracks the latest AI papers that include reproducible code. For example, you can find the data, code, and results from the DeepSeek-R1 and DeepSeek-V3 papers.
A few notable AI conferences in 2025:
Data + AI Summit by DataBricks, June 9-12, 2025
GTC by NVIDIA, March 17-21, 2025
AAAI Conference on Artificial Intelligence, February 25-March 4, 2025
Learn by Doing - Experiment and Build with AI (ft. Nebius AI Studio)
Why:
You know my stance on leveraging a strong portfolio to stand out in data science interviews. The same principle applies here: you can read all the research papers in the world, but practical application solidifies your understanding and looks great on your resume.
How:
This is why I recently partnered with Nebius AI to test out their new Nebius AI Studio, a platform designed to simplify the fine-tuning and deployment of AI models. Nebius AI Studio offers a user-friendly environment where you can experiment with the latest generative AI models in their playground and fine-tune parameters using your own data. The platform also provides a side-by-side comparison tool, allowing you to evaluate how different models perform on the same prompt from your use case. Currently, it supports a wide range of models, including text-to-text models such as DeepSeek-R1 and DeepSeek-V3, Meta Llama Instruct, and Qwen 2.5 Coder; embedding models such as BAAI BGE-ICL; text-to-image models such as FLUX.1-schnell; and vision models such as Qwen Instruct and Llava.
Not convinced? Here are a few more reasons to give it a try:
No coding required to get started: You can test system prompts and user messages in the playground first with point-and-click parameter tuning, before using their API keys to scale your applications.
Free credit upon sign up: You get $1 credit when you sign up for free, which can be used to try out hundreds of prompts.
Exclusive credit for text-to-image models: I was also able to secure a $25 credit for text-to-image models if you load my code ‘TEXT2IMAGE’ into your wallet after signing up for a free account. You’ll be able to access Stable Diffusion and FLUX models with this code.
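To give a feel for the playground-to-API workflow described above, here is a minimal Python sketch of assembling an OpenAI-style chat request, the kind of payload the playground’s point-and-click tuning configures for you. The endpoint URL, model IDs, and environment variable name below are my assumptions for illustration; check the Nebius AI Studio documentation for the exact values.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; confirm the exact URL in the Studio docs.
NEBIUS_API_URL = "https://api.studio.nebius.ai/v1/chat/completions"


def build_chat_request(model: str, system_prompt: str, user_message: str,
                       temperature: float = 0.6) -> dict:
    """Assemble an OpenAI-style chat-completions payload: model,
    system/user messages, and sampling settings."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,
    }


def send_chat_request(payload: dict, api_key: str) -> dict:
    """POST the payload with your Studio API key (requires network access)."""
    req = urllib.request.Request(
        NEBIUS_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Build the same prompt for two models, mirroring the Studio's
# side-by-side comparison tool (model IDs are assumptions).
system = "You are a concise data science tutor."
question = "Explain chain-of-thought prompting in two sentences."
payload_r1 = build_chat_request("deepseek-ai/DeepSeek-R1", system, question)
payload_v3 = build_chat_request("deepseek-ai/DeepSeek-V3", system, question)

# Uncomment with a real key to actually call the API:
# print(send_chat_request(payload_r1, os.environ["NEBIUS_API_KEY"]))
```

Once the prompt behaves the way you want in the playground, moving to code like this is mostly a matter of copying over the model name and parameter values you settled on.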
Engage with AI Communities
Why:
Learning in isolation is hard. With people of all backgrounds and ages now exploring AI, sharing your learning journey can be an invaluable way to accelerate the learning process, while making friends along the way.
How:
Hugging Face Forum: Hugging Face has become the leading platform for open-source AI and AI models in recent years. The platform fosters a collaborative community where developers can share and deploy models and datasets for LLMs and other AI applications. The Hugging Face Forum is a great place for developers to discuss hands-on AI applications.
DeepLearning.AI community: DeepLearning.AI is a company founded by Andrew Ng, a pioneer in AI research and a Stanford professor. Andrew Ng’s courses are highly rated and up-to-date with leading-edge machine learning techniques. The Deep Learning Specialization, specifically, has a particularly strong learner community. This community also hosts regular in-person and online events, which can be a great way to expand your network.
OpenAI Community: Similar to the Hugging Face Forum, the OpenAI Community is a platform for developers to connect, share projects, and ask questions regarding ChatGPT developments.
Reddit: Explore groups such as r/OpenAI, r/ArtificialIntelligence, r/dalle2.
Monitor AI Policy and Ethics Trends
Why:
AI development is increasingly shaped by ethical considerations, regulations, and governance frameworks. There are still many big questions to tackle, such as transparency and accountability for AI misconduct, bias and discrimination amplified by unfair training data, and privacy concerns around data collection processes.
How:
MIT Tech Review AI - A weekly newsletter that helps demystify AI and often covers AI ethics and policy. Their latest piece on Musk’s probe into improper government spending is a must-read.
Stanford Human-Centered Artificial Intelligence (HAI) - Stanford HAI covers the latest research on AI impact and policy. Their blog posts are often backed by research found on arXiv, including the latest piece, AI’s Fairness Problem: When Treating Everyone the Same is the Wrong Approach, an interesting one to say the least.
Track Big Tech & Startup Innovations
Why:
AI is evolving rapidly, and keeping an eye on what big players (OpenAI, DeepMind, Meta, Anthropic, Cohere, Stability AI, DeepSeek) and emerging startups are doing can help you spot trends early.
How:
OpenAI Blog - This is the go-to source for updates directly from OpenAI on their latest models, research, and company direction.
Google DeepMind Blog - DeepMind’s blog offers in-depth insights into Google’s latest AI research, breakthroughs, and ethical considerations guiding their work.
Anthropic Research Blog - Anthropic’s research and perspectives on AI safety, alignment, and beneficial AI development.
Have a data science career question? Submit your question here to be answered in upcoming newsletter issues!
In the meantime, feel free to check out my free content on Instagram, YouTube, and LinkedIn.


