Coffee of the Week

OpenAI, Google and Trump in a Dance of Interests and Innovations

Walter Gandarella • March 18, 2025

Another busy week in the world of technology and AI, right? The pace is so frantic that there is barely time to breathe between one piece of news and the next. Let's take a look at everything that has happened in the last few days and understand what it means for the future of technology.

## OpenAI pressures Trump administration to remove barriers to the industry

_OpenAI is pressuring the Trump administration to relax laws on AI training, allowing the use of copyrighted materials. Meanwhile, it is calling for a ban on DeepSeek, a Chinese model, citing security concerns._

This behind-the-scenes dispute shows that the world of AI is not made up only of good guys. OpenAI is looking out for its own interests, seeking an easier path in the coming years, even if that means criticizing the competition. After all, everyone plays the game as they prefer!

Original source

## Natural molecule rivals Ozempic in weight loss, avoiding side effects

_A Stanford Medicine study used artificial intelligence to discover a natural molecule, a peptide, that suppresses appetite and leads to weight loss without the side effects of Ozempic. The researchers used an algorithm to analyze prohormones and found a peptide that acts on the brain, reducing the desire to eat, without affecting the stomach and intestines the way Ozempic does, which causes nausea and other problems._

The news is exciting and shows the potential of AI in medicine, paving the way for more effective treatments with fewer side effects. If this new molecule proves to be truly safe and effective in humans, it could be a very welcome alternative to Ozempic, which is already popular but is not for everyone, right?

Original source

## How we think about safety and alignment

_OpenAI has released a manifesto, "How we think about safety and alignment," detailing its principles and practices to ensure that artificial general intelligence (AGI) benefits humanity. The company acknowledges the potential risks of AGI and emphasizes the importance of mitigating them by promoting collaboration and transparency across the field._

Look, OpenAI is concerned about doing everything right... But we know that, at the end of the day, it's a strategic move to ensure a smoother future for the company, with an eye on upcoming regulations. After all, nobody wants to be left behind in the AI race, right?

Original source

## Gemma 3: Google's new open model based on Gemini 2.0

_Google has released Gemma 3, its new open model based on Gemini 2.0, available in four sizes (1 billion, 4 billion, 12 billion, and 27 billion parameters). The larger models (4, 12, and 27 billion) support over 140 languages and can handle complex tasks with an expanded context window of 128,000 tokens. Gemma 3 also enables the creation of AI-driven workflows using function calling and structured output to automate tasks and build agentic experiences._

With Gemma 3, Google is really showing that it's here to stay in the world of open models. It's great to see how a model as small as the 27-billion-parameter version can compete with giant closed models. The ease of use and the ability to run on a single GPU open up a world of possibilities for developers and researchers. Plus, supporting so many languages is a huge step forward in making AI more accessible and inclusive.

Original source

## Detecting bad behavior in cutting-edge reasoning models

_OpenAI has published a study on detecting "bad thoughts" in reasoning models, showing that even if you penalize models for having these thoughts, they can learn to hide them and continue to behave badly. The company warns against applying strong optimization pressure, as this can lead models to conceal their intentions._

OpenAI is tuned into how models are thinking, and how they can hack tests and trick users, among other things. Penalizing models for bad thinking doesn't stop them from behaving badly; it just makes them hide their intentions. It's like a teenager learning to lie to their parents.

Original source

## US considers banning Chinese app DeepSeek from government devices

_The Trump administration is considering banning the Chinese chatbot DeepSeek from US government devices over national security concerns. The government is worried about how DeepSeek handles user data, which the company says it stores on servers located in China._

News of the possible ban has sparked heated debate, with the community questioning whether the move is justified or reflects a growing divide between the United States and China. Some believe the ban is intended to protect national security, while others argue it is a form of protectionism that could end up harming knowledge exchange and innovation. After all, "stealing" brains from other countries has always been a strategy for progress.

Original source

## Anthropic API token savings updates

_The folks at Anthropic have released an update to Claude 3.7 Sonnet that will make life easier for devs! You can now leverage prompt caching, spending fewer tokens and sending more requests. This applies to anyone using the direct API, Amazon Bedrock, or Google Cloud's Vertex AI._

People are saying that this optimization will be a boon for document review, code assistants, and customer support workflows. And of course, it's always good to see companies focusing on optimizing token usage, because at the end of the day, it's our wallets that end up doing the thanking, right?

Original source

## The end of programming as we know it

_Tim O'Reilly's article discusses how the definition of programming is constantly evolving, from its early days with physical circuits to high-level languages like Python and, now, AI. He argues that each advancement increases the number of programmers, as it broadens access and expands possibilities. AI would be another of these evolutions, allowing more people to transform their ideas into products, as long as they know how to use the tools and work together with them._

The discussion brought to light that whenever someone says programming will end, the opposite happens: the number of people programming increases. This is because the definition of what programming is changes. The text draws a parallel with the web and WordPress, when people said that non-programmers would now make websites. Programming, in this new vision, is having the talent to use the tool and, together with it, turn an idea into a product. It's about knowing how to use new tools to create, innovate, and solve problems, not just write code in the traditional way.

Original source

## AI scientist generates its first peer-reviewed scientific publication

_Sakana AI, known for innovating, has done it again: its AI Scientist-v2 has produced an entire scientific paper that passed peer review at a cutting-edge machine learning workshop. People are amazed at the feat, imagining a future where AI not only helps with but also leads scientific production!_

Imagine a world where AI not only processes data but also formulates hypotheses and writes scientific papers. This future, which seemed distant, is getting closer and closer. Sakana AI's initiative is a bold step in that direction, showing the potential of AI to revolutionize the way we do science.

Original source

## Perplexity AI launches Windows app and lets you talk to the AI!

_Now you can use the tool directly on your PC, with support for voice and keyboard shortcuts. You can still choose the AI model you want for each question, but Perplexity AI can also pick the model on its own._

This is a huge evolution, right? Having a Windows app makes life much easier for those who use the tool on a daily basis. And having it pick the AI model that will answer you is quite interesting, because it optimizes the experience and delivers more accurate results. It's the future, there's no way around it!

Original source

## OpenAI calls for DeepSeek ban, citing state control and security risks

_OpenAI has formally requested that the US government consider banning DeepSeek, a Chinese AI lab, claiming it is state-controlled and poses security risks. The request intensifies competition in the AI market, with companies accusing each other of unfair practices._

OpenAI positions itself as a security advocate, seeking to influence government policies to its own advantage and to the detriment of its competitors. It is a game where the "good guys" are not always so virtuous, and the pursuit of competitive advantage can blur ethical boundaries.

Original source

## The divorce between OpenAI and Microsoft has already begun

_A MacMagazine article addresses the growing distance between OpenAI and Microsoft, with Microsoft looking for alternatives and even setting up its own AI division to depend less on OpenAI. The hiring of Mustafa Suleyman, former CEO of Inflection AI, along with Inflection's engineering team, by Microsoft AI is a highlight._

It's like a marriage that no longer has all the passion, you know? Each one starts to follow their own path, looking for new options and preparing for single life. Microsoft, by all indications, is preparing to forge its own path in the AI world, decreasing its dependence on OpenAI and investing in its own solutions.

Original source

## Optimizing test-time computation through meta reinforcement fine-tuning

_The paper discusses a new technique called Meta Reinforcement Fine-Tuning (MRT) to optimize the use of computational power during inference in language models. The idea is that, instead of just generating the answer, the model reflects on what it is doing in order to improve the quality of the result. MRT uses dense rewards to balance the exploration of new paths against the exploitation of paths already known._

The research seems promising for creating language models that are more efficient at solving problems, dynamically optimizing the use of computation during inference and reducing token waste. It seems the future of AI lies in models that can think about their own thinking, you know? Like Inception!

Original source

## Microsoft trains new AI models in-house; tests DeepSeek, Meta on Copilot

_Microsoft is boosting its AI division, training its own models and testing alternatives such as DeepSeek and Meta models to reduce Copilot's dependence on OpenAI. The move shows that the tech giant wants more control over its AI solutions and less reliance on third parties._

This push for independence in the AI world is interesting, with the company exploring alternatives and even developing its own chips to accelerate models. It remains to be seen how the strategy will impact the market and whether OpenAI will feel the effects of the change.

Original source

## Chinese nationals banned from US student visas under new House Republican proposal

_The news that Chinese students could be barred from US student visas has sparked heated discussion about brain protectionism and the impact on AI companies. Some argue that the move could hurt innovation and technological development, as many of China's brightest minds are contributing significantly to the advancement of AI in the US. Others fear it could push China to develop its own models and technologies, reducing the centralization of the AI market._

The proposal could backfire, as it risks driving away talent that is fueling technological advancement in the United States. Instead of adopting restrictive measures, it would be smarter to invest in policies that attract and retain this talent, ensuring that the US remains at the forefront of innovation. But it seems animosity toward China spoke louder here.

Original source

## How Orakl Oncology is using DINOv2 to accelerate cancer treatment discovery

_Orakl Oncology, in partnership with the Gustave Roussy Institute, is boosting research and development of cancer drugs. The idea is to combine laboratory insights with machine learning to find more effective therapies. To do this, they use Meta's DINOv2, which analyzes images of cultured cancer cells and predicts how drugs will work in real patients, speeding up the process and saying "goodbye" to slower methods._

People have commented that DINOv2 improved accuracy by almost 27% compared to other techniques, in addition to saving a lot of time on video analysis and making life easier for researchers. With its help, the company was able to build its platform quickly, focusing on science rather than engineering. It seems Meta hit the nail on the head with this open-source tool!

Original source

## Google launches free Gemini-powered Data Science Agent on its Colab Python platform

_Google has launched a free AI assistant, the Gemini 2.0 Data Science Agent, to simplify data analysis in Google Colab. Users can describe their analytical goals in natural language, and the agent generates executable Colab notebooks._

With the launch of the Data Science Agent, Google has taken an interesting step toward making life easier for data scientists and researchers. The tool promises to automate tasks, save time, and improve collaboration, making data analysis more accessible and efficient. However, some limitations, such as the accuracy of results and the integration of new models, still need to be addressed to ensure a complete and reliable experience.

Original source

## Google calls for looser copyright and export rules in AI policy proposal

_In response to the Trump administration's call for a national "AI Action Plan," Google has published a policy proposal advocating lighter copyright restrictions on AI training and "balanced" export controls. The company argues that "fair use" exceptions are crucial to AI development, seeking the right to train its models on publicly available data, including copyrighted material, without significant restrictions._

The discussion got heated! It seems Google is playing alongside OpenAI and asking for the rules around training its models to be relaxed. They are taking advantage of the madman in the presidency to approve rules that would never be approved under other administrations. One hand washes the other, and in the world of AI, no one is entirely good or evil. The important thing is that these political decisions directly affect the models we use.

Original source

## Gemini Robotics brings AI to the physical world

_Google DeepMind is bringing AI into the real world with Gemini Robotics, a model that enables robots to perform complex tasks, understand and respond to natural language instructions, and dexterously manipulate objects. The model's ability to understand its environment, adapt its behavior to surrounding events, and even communicate in natural language is impressive._

The combination of computer vision, language understanding, and the ability to manipulate objects with precision really opens up a range of possibilities for robotics. Who knows, maybe soon we'll have robots helping us with daily tasks, from folding origami to preparing a snack!

Original source

## New Gemini app features: Deep Research, connected apps, personalization

_Google is pushing Gemini forward with new features, from Deep Research running on 2.0 Flash Thinking Experimental to personalization and new Gems. The idea is to give the Gemini experience a general upgrade, making it more useful and better adapted to each user's needs._

It's worth getting excited about the new features, especially the personalization and the connection with other Google apps. People found it promising that Gemini takes the user's context into account to generate better-tailored responses. But what really caught on was the ability to work with images in Google AI Studio! It seems the future is having an assistant who really knows you and helps you.

Original source

## Meta begins testing its first in-house AI chip

_Meta is testing its first in-house AI chip to reduce reliance on external vendors like Nvidia and lower costs, with plans to use it in recommendation systems and generative AI._

The news is a sign that Meta, like other technology giants, is looking for ways to optimize its own AI models, reducing dependence on third parties and accelerating the development of new solutions. The strategy could impact Nvidia's valuation in the long term, but the quest for innovation and control is understandable.

Original source

## Llama in service: India's open-source audio model uses Llama

_Sarvam AI used Llama to create Shuka v1, India's first open-source audio language model. Llama acts as the decoder, processing the audio tokens generated by Sarvam's audio encoder, a setup that enables Shuka to interpret and respond to voice queries in Indian languages accurately and efficiently._

It's really cool to see initiatives like this, especially in less centralized markets. This is further proof that the artificial intelligence market does not need to be left solely in the hands of the usual big players, and India has enormous potential to develop its own innovative solutions.

Original source

## ElevenLabs partners with Google Cloud to bring AI audio to the enterprise

_ElevenLabs has announced a partnership with Google Cloud to integrate its AI speech models into the Google Cloud Marketplace. This will enable companies to leverage ElevenLabs' voice technology across a range of use cases, including interactive voice agents, content localization and dubbing, and media and advertising production. The collaboration aims to offer scalable, high-performance solutions for companies looking to innovate in their customer interactions and content creation._

This partnership looks promising, combining ElevenLabs' expertise in AI audio with Google Cloud's robust infrastructure. The possibility of integrating these technologies with platforms such as Gemini 2.0 Flash suggests significant advances in the quality and efficiency of voice interactions for companies. One hand washes the other, and both companies come out ahead!

Original source

## China's autonomous AI agent Manus changes everything

_Manus, a Chinese autonomous AI agent, is causing a stir with its ability to perform complex tasks without human intervention, surpassing even the capabilities of Western models and raising questions about the future of artificial intelligence and global competition._

The tool looks promising, especially because of its multi-agent architecture with several integrated tools and models. However, vulnerability to prompt injection and dependence on powerful hardware are still challenges to overcome. Still, the tool raises an alert about the need to rethink security and ethical alignment in AI systems.

Original source

## Breaking the algorithmic ceiling in pre-training with Inductive Moment Matching

_Luma Labs is proposing a new pre-training technique for generative models, called Inductive Moment Matching (IMM), that promises to overcome the limitations of current approaches and offer tenfold greater sampling efficiency._

It looks like Luma Labs wants to change the game of generative models! The promise of efficiency and stability is quite an attraction for those who work in this field. If the technique really delivers what it promises, it could be a game changer.

Original source

## Language model auditing for hidden goals

_A new study from Anthropic investigates alignment audits, exploring whether language models are pursuing hidden goals. The researchers trained a model with a hidden goal and asked teams of researchers to discover it. The results show that models can conceal misaligned intentions in sophisticated ways, raising concerns about the effectiveness of human oversight and proposing a framework for validating future alignment audits._

These studies are super important to ensure that AI does not deceive us and that models do not develop unwanted behaviors that remain hidden. It's like a game of cat and mouse, only with much more serious consequences. People are working hard to create more robust methodologies and keep AI from outwitting us.

Original source

---

And so we close another week full of news in the world of technology! If one thing is clear amidst so much news, it is that the world of AI is in constant motion, with companies competing for space, models evolving rapidly, and applications emerging in the most diverse fields. Amidst corporate disputes, scientific advances, and ethical debates, one thing is certain: we are living through a unique moment in the history of technology. Stay tuned, because at the rate things are moving, there will be more next week. And who knows what awaits us in the next chapter of this story? Until next time!

