Tech Talk: India sees a surge in fresher AI hiring
Plus UAE to use AI for all legislation, and not just to write AI laws; AI tool of the week: How to crack research papers at breakneck speed; How Anthropic's Claude makes moral judgments, and more
Editor’s note: Welcome to a refreshed version of Tech Talk, now hosted on Substack! Apologies if you receive this newsletter twice, but it won’t happen next week. Enjoy reading!
Artificial intelligence (AI) is rapidly transforming India’s job market. According to TeamLease EdTech’s Career Outlook Report 2025, 74% of employers plan to hire freshers in the first half of 2025, with AI-related roles leading the way. The shift in hiring priorities reflects a move from degrees to skills, especially in data visualization, cloud computing, and robotics. Employers seek candidates proficient in AI tools, project management software, and data analysis platforms. Soft skills such as computational thinking, analytical reasoning, and adaptability are also in high demand.
Highlights:
Overall hiring intent across industries is 79%; intent to hire freshers has risen to 74% (up from 72% in July–December 2024).
Tech startups (70%), manufacturing (66%), and engineering (62%) lead in fresher hiring.
In-demand AI roles include clinical bioinformatics associate, robotics system engineer, sustainability analyst, and prompt engineer.
Key domain skills sought: Robotic process automation, performance marketing, network security, and financial risk analysis.
Hiring trends reflect the government’s push for an AI-ready workforce through targeted funding and education initiatives.
UAE to use AI for legislation, and not just to write AI laws
Most governments, such as European Union (EU) nations with the AI Act or the US with its various AI policy frameworks, are focused on how to regulate AI, setting rules for safety, transparency, and accountability. The United Arab Emirates (UAE), however, is letting AI write all kinds of laws, not just AI laws. On 14 April 2025, the UAE Cabinet announced plans to become the first country to integrate AI into its lawmaking process.
This move ties into a $30 billion investment platform (MGX) dedicated to AI infrastructure, signaling a long-term strategic commitment. The initiatives are also part of the UAE's broader strategy to position itself as a global leader in AI by 2031, as outlined in its National Strategy for Artificial Intelligence.
Towards this end, the UAE has set up a permanent government unit, the Regulatory Intelligence Office, to lead this transformation. The office aims to cut legislative drafting time by 70%, using AI to review existing laws, court rulings, and data to recommend changes or draft new laws. It is also building a comprehensive legal database combining federal and local laws, court judgments, and government data. This unified system will be the backbone of AI-generated legislative suggestions, something most governments haven't even started planning at this scale.
Additionally, the UAE's ministry of finance, in collaboration with the artificial intelligence office and the Mohammed Bin Rashid Centre for Government Innovation, has developed the "Rules as Code" platform. This platform utilizes digital solutions and coding to streamline the design of financial laws and policies, making them more transparent and accessible.
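To make the "Rules as Code" idea concrete: the approach publishes a policy rule as executable logic alongside the legal text, so the rule can be tested and audited like software. Here is a minimal sketch with an entirely hypothetical financial-eligibility rule and threshold; it is not the UAE platform's actual code, only an illustration of machine-readable policy.

```python
# Illustrative "rules as code" sketch. The rule, the income cap, and the
# Applicant fields are all hypothetical; the point is that a policy rule
# expressed as code becomes testable, auditable, and unambiguous.
from dataclasses import dataclass

@dataclass
class Applicant:
    annual_income: float  # hypothetical unit and threshold below
    is_resident: bool

def eligible_for_relief(applicant: Applicant, income_cap: float = 120_000) -> bool:
    """A policy rule written as code: residents under the income cap qualify."""
    return applicant.is_resident and applicant.annual_income <= income_cap

if __name__ == "__main__":
    print(eligible_for_relief(Applicant(90_000, True)))   # qualifies
    print(eligible_for_relief(Applicant(150_000, True)))  # over the cap
```

Because the rule is a plain function, regulators and citizens can run the same logic against test cases and see exactly where the law draws its lines, which is the transparency benefit such initiatives aim for.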
Experts, meanwhile, are raising concerns about the move: bias in training data (particularly in law, where context and nuance are everything), transparency in how AI decisions are made, and accountability (who is responsible if AI writes a flawed law?). We will watch this space.
How Claude makes moral judgments
AI startup Anthropic has released a study analyzing over 300,000 real (anonymous) AI conversations to explore how models like Claude make moral judgments in everyday interactions. Researchers identified 3,307 distinct values and grouped them into five categories: Practical, knowledge-related, social, protective, and personal. Practical and knowledge-related values, such as helpfulness and professionalism, were most common. Ethical values surfaced more during refusals of harmful requests.
Interestingly, Claude’s values shifted by context; for example, stressing “healthy boundaries” in relationship advice and “human agency” in discussions on AI ethics. This marks the first large-scale mapping of an AI model’s expressed values in real-world use. You may read the paper here.
Why are OpenAI’s new, smarter AI models still cooking up things?
OpenAI unveiled its latest reasoning models, o3 and o4-mini, on 23 April, touting several new features. Some enthusiastic employees even suggested that o3 is approaching artificial general intelligence (AGI)—a loosely defined term referring to AI that rivals human intelligence. However, OpenAI’s own technical report reveals a major caveat: the o3 model hallucinates more frequently than its predecessors.
While reasoning models are designed to mimic human-like problem-solving, OpenAI admits that o3, despite being the most advanced so far, produces both more accurate and more inaccurate statements. On OpenAI’s PersonQA benchmark, o3 hallucinated in about 33% of responses—roughly double the rate of o1 (16%) and significantly higher than o3-mini (14.8%). The company acknowledges that more research is needed to understand why. You may read more here.
AI Tool of the Week
by AI&Beyond, with Jaspreet Bindra and Anuj Magazine
This week’s AI Unlocked hack is: How to crack research papers at breakneck speed
Why is following research papers hard?
Keeping up with AI research matters because it offers early glimpses into breakthroughs that will shape the future. As of late 2024 and early 2025, arXiv received around 24,000 new research paper submissions every month, with more than 2.7 million papers submitted to date. The volume alone creates massive information overload, like trying to read every headline in a never-ending newspaper that updates 1,000 times a day.
Even when you find a relevant paper, just figuring out if it's worth your time is a challenge. And if you do start reading, deciphering the technical content can feel like navigating a maze of equations and dense language.
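For readers who prefer to triage that firehose programmatically, arXiv also exposes a public Atom API. As a minimal sketch, the snippet below only builds a query URL for the newest papers in a category (the endpoint and parameters follow arXiv's public API; fetching and parsing the Atom feed is left out):

```python
# Minimal sketch: construct an arXiv API query URL for the newest papers
# in one category. arXiv's public Atom API lives at
# http://export.arxiv.org/api/query; fetching and parsing the resulting
# feed (e.g. with urllib plus an Atom parser) is omitted here.
from urllib.parse import urlencode

def arxiv_query_url(category: str = "cs.AI", max_results: int = 5) -> str:
    """Build a query URL listing the newest submissions in `category`."""
    params = {
        "search_query": f"cat:{category}",  # restrict to one arXiv category
        "sortBy": "submittedDate",          # newest first
        "sortOrder": "descending",
        "start": 0,
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

if __name__ == "__main__":
    print(arxiv_query_url())
```

Pasting the printed URL into a browser returns an Atom feed of the latest submissions, a quick first filter before tools like alphaXiv take over.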
How can alphaXiv.org help?
alphaXiv.org is an interactive platform layered on top of arXiv, built specifically to make reading research papers easier and more collaborative. Instead of passively struggling through dense papers alone, readers can highlight confusing sections, read others' comments, ask questions, and even hear directly from the authors. It’s like turning every research paper into a live forum, where insights are crowdsourced and learning is shared.
How to access: https://www.alphaxiv.org/
alphaXiv.org can help you:
- Find what matters: Browse trending papers, filter by active discussions, or search by topic.
- Engage, not just read: Ask questions, annotate specific sections, or follow expert discussions.
- Hear directly from authors: Many respond to reader comments, clarifying tough parts.
Example:
Imagine you're a marketing leader exploring how AI image generation can reshape content creation. Here's how alphaXiv can guide you step by step:
- Go to https://www.alphaxiv.org/
- Search smarter: Use keywords like "chatgpt image generation".
- Open the paper: Once you find a promising one, click and open it.
- Ask your questions: Start reading. If you're stuck, select the text/paragraph and post a question in the chat.
- Absorb insights: Read comments, debates, and clarifications about how the paper applies to real-world marketing and creative workflows.
By Monday’s strategy meeting, you’ll have transformed dense academic text into practical insights you can use to guide content decisions.
What makes alphaXiv.org special?
- Not just reading—co-learning: The platform turns passive reading into interactive learning.
- Built for speed: Highlighted trends and community vetting save hours of sorting.
- Made by researchers: For those who value clarity, access, and shared understanding in AI.
Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.
AI cybercrimes are yet to pose a major threat
Ransomware attacks on Indian organizations rose sharply in 2024, according to a new report by Kaspersky, which noted that businesses faced an average of 665 attempted attacks per day. In all, Kaspersky solutions blocked 243,548 ransomware incidents over the year. Ransomware is malicious software that locks or encrypts systems and data, demanding payment for release.
The most common type of ransomware in India in 2024 was Trojan-Ransom.Win32.Wanna.m, which modifies data on a victim's computer so it can no longer be used, or prevents the computer from running correctly. Once the data has been “taken hostage” (blocked or encrypted), the cybercriminal demands a ransom, promising that on payment the victim will receive a program to restore the data or the computer’s performance.
However, cybercriminals have so far made limited use of AI for direct intrusion-style attacks. Instead, generative AI (GenAI) is mostly used in social engineering—creating fake profiles, images, and messages that appear fluent and convincing, according to the Sophos Annual Threat Report released in April.
For instance, RaccoonStealer developers used an AI-generated raccoon image to enhance the look of their credential theft portal. The report added that large language models (LLMs), too, can be used to customize grammatically correct content for targets, fooling content filters that identify signatures in spam and phishing emails. That said, while some criminals are beginning to explore AI for routine tasks and spam services, many in underground forums remain sceptical. Sophos X-Ops expects wider adoption in the future, noting that malicious AI use is still in early stages but slowly emerging.
That said, a new Microsoft report cautioned that AI is making it easier and cheaper for cybercriminals to launch convincing attacks, reducing the technical skills required to commit fraud. Tools range from repurposed legitimate apps to underground software built for malicious use, helping threat actors rapidly generate realistic content for scams.
AI can scrape company data and build detailed target profiles, enabling sophisticated social engineering attacks. Scammers are also creating fake e-commerce sites—complete with AI-generated reviews, storefronts, and testimonials—to lure victims. Techniques like deepfakes, voice cloning, phishing, and spoofed websites are being used at scale to appear credible. According to Microsoft’s Anti-Fraud Team, AI-driven fraud is rising globally, with significant activity from China and Germany. Germany’s prominence in online commerce makes it a key target for these evolving threats.
Fake e-commerce sites, for instance, can now be created in minutes, complete with AI-generated product descriptions, reviews, and customer service bots that stall victims and delay chargebacks. Scammers, according to the report, also use generative AI to create fake job listings, profiles, and emails, luring job seekers into sharing personal or financial data.
Hope you folks have a great weekend, and your feedback will be much appreciated.