Jobs & Careers

7 Mistakes Data Scientists Make When Applying for Jobs

Image by Author | Canva

 

The data science job market is crowded. Employers and recruiters are sometimes real a-holes who ghost you just when you thought you’d start negotiating your salary.

As if fighting your competition, recruiters, and employers is not enough, you also have to fight yourself. Sometimes, the lack of success at interviews really is on data scientists. Making mistakes is acceptable. Not learning from them is anything but!

So, let’s dissect some common mistakes and see how not to make them when applying for a data science job.

 
Mistakes Data Scientists Make When Applying for Jobs

 

1. Treating All Roles the Same

 
Mistake: Sending the same resume and cover letter to each role you apply for, from research-heavy and client-facing positions, to being a cook or a Timothée Chalamet lookalike.

Why it hurts: Because you want the job, not the “Best Overall Candidate For All the Positions We’re Not Hiring For” award. Companies want you to fit into the particular job.

A role at a software startup might prioritize product analytics, while an insurance company is hiring for modeling in R.

If you don’t tailor your CV and cover letter to present yourself as a strong fit for the position, you risk being overlooked before you even get to the interview.

A fix:

  • Read the job description carefully.
  • Tailor your CV and cover letter to the mentioned job requirements – skills, tools, and tasks.
  • Don’t just list skills, but show your experience with relevant applications of those skills.

 

2. Overly Generic Data Projects

 
Mistake: Submitting a data project portfolio brimming with worn-out projects like the Titanic, Iris, and MNIST datasets or house price prediction.

Why it hurts: Because recruiters will fall asleep reading your application. They’ve seen the same portfolios thousands of times, and they’ll ignore yours, as it signals a lack of business thinking and creativity.

A fix:

  • Work with messy, real-world data. Source the projects and data from sites such as StrataScratch, Kaggle, DataSF, NYC Open Data, DataHub, Awesome Public Datasets, etc.
  • Work on less common projects.
  • Choose projects that show your passions and solve practical business problems, ideally those that your employer might have.
  • Explain tradeoffs and why your approach makes sense in a business context.

 

3. Underestimating SQL

 
Mistake: Not practicing SQL enough, because “it’s easy compared to Python or machine learning”.

Why it hurts: Because knowing Python and how to avoid overfitting doesn’t make you an SQL expert. Oh, yeah, SQL is also heavily tested, especially for analyst and mid-level data science roles. Interviews often focus more on SQL than Python.

A fix:

  • Practice complex SQL concepts: subqueries, CTEs, window functions, time series joins, pivoting, and recursive queries.
  • Use platforms like StrataScratch and LeetCode to practice real-world SQL interview questions.
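If you want a quick sanity check of where you stand, a classic window-function pattern (top earner per department) can be rehearsed with Python’s built-in sqlite3 module. The table and values below are made up for illustration, and window functions require SQLite 3.25 or newer:

```python
import sqlite3

# Toy in-memory database with a hypothetical salaries table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE salaries (employee TEXT, department TEXT, salary INTEGER);
    INSERT INTO salaries VALUES
        ('Ana', 'data', 120), ('Ben', 'data', 150),
        ('Cleo', 'sales', 90), ('Dan', 'sales', 110);
""")

# Classic interview pattern: top earner per department via a window function.
query = """
    SELECT employee, department, salary
    FROM (
        SELECT *,
               RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS rnk
        FROM salaries
    ) AS ranked
    WHERE rnk = 1;
"""
for row in conn.execute(query):
    print(row)
```

Once this pattern feels easy, practice the harder ones on the same toy data: CTEs, self-joins on time series, and recursive queries.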

 

4. Ignoring Product Thinking

 
Mistake: Focusing on model metrics instead of business value.

Why it hurts: Because a model that predicts customer churn with 94% ROC-AUC, but mostly flags customers who don’t use the product anymore, has no business value. You can’t retain customers who are already gone. Your skills don’t exist in a vacuum; employers want you to use them to deliver value.
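As a toy illustration (all numbers invented), the gap between “flagged by the model” and “actionable for the business” looks like this:

```python
# Hypothetical churn scores joined with a simple activity signal.
customers = [
    {"id": 1, "p_churn": 0.97, "active_last_30d": False},  # already gone
    {"id": 2, "p_churn": 0.91, "active_last_30d": False},  # already gone
    {"id": 3, "p_churn": 0.88, "active_last_30d": True},   # still reachable
    {"id": 4, "p_churn": 0.35, "active_last_30d": True},
]

# Model-centric view: everyone above the threshold is a "churn risk".
flagged = [c for c in customers if c["p_churn"] >= 0.8]

# Business view: only customers who still use the product can be retained,
# so a retention campaign should target the intersection.
actionable = [c for c in flagged if c["active_last_30d"]]

print(len(flagged), len(actionable))  # prints: 3 1
```

The model’s ranking may be excellent, yet two of the three flags are wasted retention budget. That is exactly the kind of reasoning interviewers want to hear.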

A fix:

  • Tie every model metric to a business outcome: revenue saved, costs cut, customers retained.
  • Check that the cases your model flags are ones the business can still act on.
  • When presenting a project, lead with the decision it supports, not the metric it optimizes.

 

5. Ignoring MLOps

 
Mistake: Focusing only on building a model while ignoring its deployment, monitoring, fine-tuning, and how it runs in production.

Why it hurts: Because you can stick your model you-know-where if it’s not usable in production. Most employers won’t consider you a serious candidate if you don’t know how your model gets deployed, retrained, or monitored. You won’t necessarily do all that by yourself. But you’ll have to show some knowledge, as you’ll work with machine learning engineers to make sure your model actually works.
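You don’t need a full MLOps stack to show this kind of awareness. A minimal sketch of one monitoring idea, checking whether a live feature has drifted from its training-time baseline (the baseline numbers here are made up), could look like:

```python
import statistics

# Hypothetical baseline statistics stored at training time.
TRAIN_MEAN, TRAIN_STD = 50.0, 10.0

def drift_alert(live_values, z_threshold=3.0):
    """Flag when the mean of a live feature drifts far from the training mean."""
    live_mean = statistics.fmean(live_values)
    standard_error = TRAIN_STD / len(live_values) ** 0.5
    z = abs(live_mean - TRAIN_MEAN) / standard_error
    return z > z_threshold

print(drift_alert([49, 52, 48, 51, 50]))  # close to baseline: False
print(drift_alert([80, 85, 78, 90, 82]))  # clearly drifted: True
```

In production you would log inputs and predictions, run checks like this on a schedule, and trigger retraining when they fire; being able to describe that loop is usually enough to hold your own with machine learning engineers.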

A fix:

  • Learn the basics of deployment: packaging a model, serving it behind an API, containers.
  • Understand monitoring: track prediction quality and data drift once the model is live.
  • Get comfortable with the vocabulary of tools like Docker, MLflow, and Airflow so you can work effectively with machine learning engineers.

 

6. Being Unprepared for Behavioral Interview Questions

 
Mistake: Brushing off questions like “Tell me about a challenge you faced” as unimportant and not preparing for them.

Why it hurts: These questions are not a part of the interview (only) because the interviewer is bored to death with her family life, so she’d rather sit there with you in a stuffy office asking stupid questions. Behavioral questions test how you think and communicate.

A fix:

  • Prepare a few stories from real projects: a challenge, a conflict, a failure, a success.
  • Structure your answers with the STAR method (Situation, Task, Action, Result).
  • Practice saying them out loud so they sound natural, not rehearsed.

 

7. Using Buzzwords Without Context

 
Mistake: Packing your CV with technical and business buzzwords, but no concrete examples.

Why it hurts: Because “Leveraged cutting-edge big data synergies to streamline scalable data-driven AI solution for end-to-end generative intelligence in the cloud” doesn’t really mean anything. You might accidentally impress someone with that. (But don’t count on that.) More often, you’ll be asked to explain what you mean by that and risk admitting you’ve no idea what you’re talking about.

A fix:

  • Avoid using buzzwords and communicate clearly.
  • Know what you’re talking about. If you can’t avoid using buzzwords, then for every buzzword, include a sentence that shows how you used it and why.
  • Don’t be vague. Instead of saying “I have experience with DL”, say “I used an LSTM model to forecast product demand and reduced stockouts by 24%”.

 

Conclusion

 
Avoiding these seven mistakes is not difficult, but making them can be costly. The recruitment process in data science is complicated and grueling enough. Don’t make your life even harder by repeating the same mistakes as everyone else.
 
 

Nate Rosidi is a data scientist working in product strategy. He’s also an adjunct professor teaching analytics, and the founder of StrataScratch, a platform helping data scientists prepare for their interviews with real interview questions from top companies. Nate writes on the latest trends in the career market, gives interview advice, shares data science projects, and covers everything SQL.






NVIDIA Reveals Two Customers Accounted for 39% of Quarterly Revenue


NVIDIA disclosed on August 28, 2025, that two unnamed customers contributed 39% of its revenue in the July quarter, raising questions about the chipmaker’s dependence on a small group of clients.

The company posted record quarterly revenue of $46.7 billion, up 56% from a year ago, driven by insatiable demand for its data centre products.

In a filing with the U.S. Securities and Exchange Commission (SEC), NVIDIA said “Customer A” accounted for 23% of total revenue and “Customer B” for 16%. A year earlier, its top two customers made up 14% and 11% of revenue.

The concentration highlights the role of large buyers, many of whom are cloud service providers. “Large cloud service providers made up about 50% of the company’s data center revenue,” NVIDIA chief financial officer Colette Kress said on Wednesday. Data center sales represented 88% of NVIDIA’s overall revenue in the second quarter.

“We have experienced periods where we receive a significant amount of our revenue from a limited number of customers, and this trend may continue,” the company wrote in the filing.

One of the customers could be Saudi Arabia’s AI firm Humain, which is building two data centers in Riyadh and Dammam, slated to open in early 2026. The company has secured approval to import 18,000 NVIDIA AI chips.

The second customer could be OpenAI or one of the major cloud providers — Microsoft, AWS, Google Cloud, or Oracle. Another possibility is xAI.

Previously, Elon Musk said xAI has 230,000 GPUs, including 30,000 GB200s, operational for training its Grok model in a supercluster called Colossus 1. Inference is handled by external cloud providers. 

Musk added that Colossus 2, which will host an additional 550,000 GB200 and GB300 GPUs, will begin going online in the coming weeks. “As Jensen Huang has stated, xAI is unmatched in speed. It’s not even close,” Musk wrote in a post on X.

Meanwhile, OpenAI is preparing for a major expansion. Chief Financial Officer Sarah Friar said the company plans to invest in trillion-dollar-scale data centers to meet surging demand for AI computation.

The post NVIDIA Reveals Two Customers Accounted for 39% of Quarterly Revenue appeared first on Analytics India Magazine.




‘Reliance Intelligence’ is Here, In Partnership with Google and Meta 


Reliance Industries chairman Mukesh Ambani has announced the launch of Reliance Intelligence, a new wholly owned subsidiary focused on artificial intelligence, marking what he described as the company’s “next transformation into a deep-tech enterprise.”

Addressing shareholders, Ambani said Reliance Intelligence had been conceived with four core missions—building gigawatt-scale AI-ready data centres powered by green energy, forging global partnerships to strengthen India’s AI ecosystem, delivering AI services for consumers and SMEs in critical sectors such as education, healthcare, and agriculture, and creating a home for world-class AI talent.

Work has already begun on gigawatt-scale AI data centres in Jamnagar, Ambani said, adding that they would be rolled out in phases in line with India’s growing needs. 

These facilities, powered by Reliance’s new energy ecosystem, will be purpose-built for AI training and inference at a national scale.

Ambani also announced a “deeper, holistic partnership” with Google, aimed at accelerating AI adoption across Reliance businesses. 

“We are marrying Reliance’s proven capability to build world-class assets and execute at India scale with Google’s leading cloud and AI technologies,” Ambani said.

Google CEO Sundar Pichai, in a recorded message, said the two companies would set up a new cloud region in Jamnagar dedicated to Reliance.

“It will bring world-class AI and compute from Google Cloud, powered by clean energy from Reliance and connected by Jio’s advanced network,” Pichai said. 

He added that Google Cloud would remain Reliance’s largest public cloud partner, supporting mission-critical workloads and co-developing advanced AI initiatives.

Ambani further unveiled a new AI-focused joint venture with Meta. 

He said the venture would combine Reliance’s domain expertise across industries with Meta’s open-source AI models and tools to deliver “sovereign, enterprise-ready AI for India.”

Meta founder and CEO Mark Zuckerberg, in his remarks, said the partnership is aimed to bring open-source AI to Indian businesses at scale. 

“With Reliance’s reach and scale, we can bring this to every corner of India. This venture will become a model for how AI, and one day superintelligence, can be delivered,” Zuckerberg said.

Ambani also highlighted Reliance’s investments in AI-powered robotics, particularly humanoid robotics, which he said could transform manufacturing, supply chains and healthcare. 

“Intelligent automation will create new industries, new jobs and new opportunities for India’s youth,” he told shareholders.

Calling AI an opportunity “as large, if not larger” than Reliance’s digital services push a decade ago, Ambani said Reliance Intelligence would work to deliver “AI everywhere and for every Indian.”

“We are building for the next decade with confidence and ambition,” he said, underscoring that the company’s partnerships, green infrastructure and India-first governance approach would be central to this strategy.

The post ‘Reliance Intelligence’ is Here, In Partnership with Google and Meta  appeared first on Analytics India Magazine.




Cognizant, Workfabric AI to Train 1,000 Context Engineers


Cognizant has announced that it would deploy 1,000 context engineers over the next year to industrialise agentic AI across enterprises.

According to an official release, the company claimed that the move marks a “pivotal investment” in the emerging discipline of context engineering. 

As part of this initiative, Cognizant said it is partnering with Workfabric AI, the company building the context engine for enterprise AI. 

Cognizant’s context engineers will be powered by Workfabric AI’s ContextFabric platform, the statement said, adding that the platform transforms the organisational DNA of enterprises (how their teams work, including their workflows, data, rules, and processes) into actionable context for AI agents.



