Bengaluru Developer Builds ‘Mission Office’ iPhone Game with Potholes and Cows Using ChatGPT

A Bengaluru-based software developer has launched an iPhone game that captures the daily chaos of commuting in the city with the help of AI. Titled Mission Office, the game was created by Harin Nitisvaar using ChatGPT to generate assets, with final edits done in Photoshop.
Built using SwiftUI in Xcode, the game is now available for download on the iOS App Store.
Drawing from his own rides to work on an Ather electric scooter, Nitisvaar, who currently works at Zepto and formerly at Swiggy and Dunzo, designed the game to simulate the challenges faced by Bengaluru commuters.
Players dodge potholes, barricades, and cows, aiming to reach their office in one piece. Familiar city landmarks add to the immersive, cartoon-style gameplay.
“Currently, the player image is AI-generated, and I can even create your avatar driving your favorite vehicle,” Nitisvaar replied to a post asking if players can customise the vehicle to other EVs like Ultraviolette, Ola, or Chetak.
A version 2.0 is in the works, which will add traffic jams and urban floods to further simulate the Bengaluru commute.
The game’s clever integration of real-world elements and its humorous tone have struck a chord on social media. Users on X (formerly Twitter) have praised its relatability, with some suggesting new features like auto-rickshaws and regional language dialogues.
Nitisvaar’s approach stands out for its low-cost development process powered by AI tools, showing how generative models can help solo developers create visually rich games.
While the game is available only on iOS for now, there’s no word yet on an Android release.
Creating Slick Data Dashboards with Python, Taipy & Google Sheets


Image by Author | Ideogram
# Introduction
Data has become a vital resource for any business, as it provides a means for companies to gain valuable insights, particularly when making decisions. Without data, decisions rely solely on instinct and luck, which is not the most effective approach.
However, vast amounts of raw data are difficult to understand: raw data provides no direct insights and requires further processing. This is why many people rely on data dashboards to summarize, visualize, and navigate their data. By developing a sleek dashboard, we can give non-technical users a straightforward way to gain insights from data.
That’s why this article will explore how to create a sleek data dashboard by leveraging Python, Taipy, and Google Sheets.
Let’s get into it.
# Developing a Slick Data Dashboard
We will start the tutorial by preparing all the necessary credentials to access Google Sheets via Python. First, create a Google account and navigate to the Google Cloud Console. Then, navigate to APIs & Services > Library, where you need to enable the Google Sheets API and Google Drive API.
After enabling the APIs, return to APIs & Services > Credentials and navigate to Create Credentials > Service Account. Follow the prompts and assign a role such as Editor or Owner so that we can read and write to Google Sheets. Select the service account we just created, then navigate to Keys > Add Key > Create New Key. Select JSON and download the credentials.json file. Store it somewhere safe, open the file, and copy the email value under client_email.
For the dataset, we will use the cardiac dataset from Kaggle as an example. Upload the file to Google Drive and open it as a Google Sheet. In the Google Sheets file, click Share and add the email you just copied. Lastly, copy the URL of the Google Sheets file, as we will access the data via this URL later.
Open your favorite IDE, and then we will structure our project as follows:
```
taipy_gsheet/
│
├── config/
│   └── credentials.json
├── app.py
└── requirements.txt
```
Create all the necessary files, and then we will start developing our dashboard. We will be using Taipy for the application framework, pandas for data manipulation, gspread and oauth2client for interacting with the Google Sheets API, and Plotly for creating visualizations. In the requirements.txt file, add the following packages:
```
taipy
pandas
gspread
oauth2client
plotly
```
These are the necessary libraries for our tutorial, and we will install them in our environment. Don’t forget to use a virtual environment to avoid breaking your main environment. We will also use Python 3.12; at the time of writing, this is the Python version that works with all of the libraries above.
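If you need one, a minimal virtual environment setup on macOS or Linux might look like the following (a sketch assuming Python 3.12 is installed and using the conventional .venv directory name):
```bash
# create and activate an isolated environment for the project
python3.12 -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
```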
Install the libraries using the following command:
```bash
pip install -r requirements.txt
```
If the installation is successful, we will prepare our application. In app.py, we will build the code to set up our dashboard.
First, we will import all the necessary libraries that we will use for developing the application.
```python
import pandas as pd
import gspread
import plotly.express as px

import taipy as tp
from taipy import Config
from taipy.gui import Gui
import taipy.gui.builder as tgb
```
Next, we will load the data from Google Sheets using the following code. Change the SHEET_URL value to your actual data URL. Additionally, we will preprocess the data so that it is ready to use.
```python
SHEET_URL = "https://docs.google.com/spreadsheets/d/1Z4S3hnV3710OJi4yu5IG0ZB5w0q4pmNPKeYy8BTyM8A/"

client = gspread.service_account(filename="config/credentials.json")
df_raw = pd.DataFrame(client.open_by_url(SHEET_URL).get_worksheet(0).get_all_records())

df_raw["sex"] = pd.to_numeric(df_raw["sex"], errors="coerce").fillna(0).astype(int)
df_raw["sex_label"] = df_raw["sex"].map({0: "Female", 1: "Male"})
```
Then, we will prepare the dashboard with Taipy. Taipy is an open-source library for data-driven applications, covering both front-end and back-end development. Let’s use it to build the data dashboard with some of Taipy’s basic features.
In the code below, we will develop a scenario, which is a pipeline the user can execute for what-if analysis. It’s essentially a framework for experimenting with the various parameters we can pass to the pipeline. For example, here is how we prepare a scenario that computes the average age, taking the gender filter as input.
```python
def compute_avg_age(filtered_df: pd.DataFrame, gender_filter: str) -> float:
    data = (
        filtered_df
        if gender_filter == "All"
        else filtered_df[filtered_df["sex_label"] == gender_filter]
    )
    return round(data["age"].mean(), 1) if not data.empty else 0


filtered_df_cfg = Config.configure_data_node("filtered_df")
gender_filter_cfg = Config.configure_data_node("gender_filter")
avg_age_cfg = Config.configure_data_node("avg_age")

task_cfg = Config.configure_task(
    "compute_avg_age", compute_avg_age, [filtered_df_cfg, gender_filter_cfg], avg_age_cfg
)
scenario_cfg = Config.configure_scenario("cardiac_scenario", [task_cfg])
Config.export("config.toml")
```
We will revisit the scenario later, but let’s prepare the gender selection itself and its default state.
```python
gender_lov = ["All", "Male", "Female"]
gender_selected = "All"
filtered_df = df_raw.copy()
pie_fig = px.pie()
box_fig = px.box()
avg_age = 0
```
Next, we will create the functions that update our variables and data visualizations when a user interacts with the dashboard, such as by selecting a gender or submitting a scenario.
```python
def update_dash(state):
    subset = (
        df_raw if state.gender_selected == "All"
        else df_raw[df_raw["sex_label"] == state.gender_selected]
    )
    state.filtered_df = subset
    state.avg_age = round(subset["age"].mean(), 1) if not subset.empty else 0
    state.pie_fig = px.pie(
        subset.groupby("sex_label")["target"].count().reset_index(name="count"),
        names="sex_label", values="count",
        title=f"Target Count -- {state.gender_selected}"
    )
    state.box_fig = px.box(subset, x="sex_label", y="chol", title="Cholesterol by Gender")


def save_scenario(state):
    state.scenario.filtered_df.write(state.filtered_df)
    state.scenario.gender_filter.write(state.gender_selected)
    state.refresh("scenario")
    tp.gui.notify(state, "s", "Scenario saved -- submit to compute!")
```
With the functions ready, we will compose a basic front-end dashboard using the code below:
```python
with tgb.Page() as page:
    tgb.text("# Cardiac Arrest Dashboard")
    tgb.selector(value="{gender_selected}", lov="{gender_lov}",
                 label="Select Gender:", on_change=update_dash)

    with tgb.layout(columns="1 1", gap="20px"):
        tgb.chart(figure="{pie_fig}")
        tgb.chart(figure="{box_fig}")

    tgb.text("### Average Age (Live): {avg_age}")
    tgb.table(data="{filtered_df}", pagination=True)

    tgb.text("---")
    tgb.text("## Scenario Management")
    tgb.scenario_selector("{scenario}")
    tgb.selector(label="Scenario Gender:", lov="{gender_lov}",
                 value="{gender_selected}", on_change=save_scenario)
    tgb.scenario("{scenario}")
    tgb.scenario_dag("{scenario}")
    tgb.text("**Avg Age (Scenario):**")
    tgb.data_node("{scenario.avg_age}")
    tgb.table(data="{filtered_df}", pagination=True)
```
The dashboard above is simple, but it will change according to the selections we make.
Lastly, we will prepare the orchestration process with the following code:
```python
if __name__ == "__main__":
    tp.Orchestrator().run()

    scenario = tp.create_scenario(scenario_cfg)
    scenario.filtered_df.write(df_raw)
    scenario.gender_filter.write("All")

    Gui(page).run(title="Cardiac Arrest Dashboard", dark_mode=True)
```
Once you have the code ready, run the dashboard from the project root with the following command (the standard way to launch the script, given the __main__ block above):
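```bash
python app.py   # launches the Taipy Gui defined in app.py's __main__ block
```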
The dashboard will automatically open in your browser, showing the simple cardiac arrest dashboard with its visualizations and the gender selector.
Scrolling down reveals the scenario management section. Try selecting a gender and submitting the scenario to see how the average age changes.
That’s how you can build a slick data dashboard with just a few components. Explore the Taipy documentation to add visualizations and features that are suitable for your dashboard needs.
# Wrapping Up
Data is a resource that every company needs, but gaining insights from the data is more difficult if it is not visualized. In this article, we have created a sleek data dashboard using Python, Taipy, and Google Sheets. We demonstrated how to connect to data from Google Sheets and utilize the Taipy library to construct an interactive dashboard.
I hope this has helped!
Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and written media. Cornellius writes on a variety of AI and machine learning topics.
What is Data Science in Simple Words?


Image by Editor | ChatGPT
# Introduction
“Data science”, “data scientist”, “data-driven systems and processes”, and so on…
Data is everywhere and has become a key element in every industry and business, as well as in our daily lives. But with so many data-related terms and buzzwords, it is easy to get lost and lose track of what exactly each one means, especially one of the broadest concepts: data science. This article explains in simple terms what data science is (and what it isn’t), the knowledge areas it involves, common data science processes in the real world, and their impact.
# What is Data Science?
Data science is best described as a blended discipline that combines multiple knowledge areas (explained shortly). Its primary focus is on using and leveraging data to reveal patterns, answer questions, and support decisions — three critical aspects needed in virtually every business and organization today.
Take a retail firm, for instance: data science can help it identify best-selling products in certain seasons (patterns), explain why certain customers are leaving for competitors (questions), and determine how much inventory to stock for next winter (decisions). Since data is the core asset in any data science process, it is important to identify the relevant data sources. In this retail example, these could include purchase histories, customer behavior data, and sales figures over time.


Data science example applied to the retail sector | Image generated by OpenAI and partly modified by the Author
So, what are the three key areas that, when blended together, form the scope of data science?
- Math and statistics, to analyze, measure, and understand the main properties of the data
- Computer science, to manage and process large datasets efficiently and effectively through software implementations of mathematical and statistical methods
- Domain knowledge, to ease the “real-world translation” of processes applied, understand requirements, and apply insights gained to the specific application domain: business, health, sports, etc.
Data science is a blended discipline that combines multiple knowledge areas.
# Real World Scope, Processes, and Impact
With so many related areas, like data analysis, data visualization, analytics, and even artificial intelligence (AI), it is important to demystify what data science isn’t. Data science is not limited to collecting, storing, and managing data in databases or performing shallow analyses, nor is it a magic wand that provides answers without domain knowledge and context. It is neither the same as artificial intelligence nor its most data-related subdomain: machine learning.
While AI and machine learning focus on building systems that mimic intelligence by learning from data, data science encompasses the comprehensive process of gathering, cleaning, exploring, and interpreting data to draw insights and guide decision-making. Thus, in simple terms, the essence of data science processes is to deeply analyze and understand data to connect it to the real-world problem at hand.
These activities are often framed as part of a data science lifecycle: a structured, cyclical workflow that typically moves from understanding the business problem to collecting and preparing data, analyzing and modeling it, and finally deploying and monitoring solutions. This ensures that data-driven projects remain practical, aligned with real needs, and continuously improved.
Data science impacts real-world processes in businesses and organizations in several ways:
- Revealing patterns in complex datasets, for instance, customer behavior and preferences over products
- Improving operational and strategic decision-making with insights driven from data, to optimize processes, reduce costs, etc.
- Predicting trends or events, e.g., future demand (the use of machine learning techniques as part of data science processes is common for this purpose)
- Personalizing user experience through products, content, and services, and adapting them to their preferences or needs
To broaden the picture, here are a couple of other domain examples:
- Healthcare: Predicting patient readmission rates, identifying disease outbreaks from public health data, or aiding drug discovery through the analysis of genetic sequences
- Finance: Detecting fraudulent credit card transactions in real time or building models to assess loan risk and creditworthiness
# Clarifying Related Roles
Beginners often find it confusing to distinguish between the many roles in the data space. While data science is broad, here’s a simple breakdown of some of the most common roles you’ll encounter:
- Data Analyst: Focuses on describing the past and present, often through reports, dashboards, and descriptive statistics to answer business questions
- Data Scientist: Works on prediction and inference, often building models and running experiments to forecast future outcomes and uncover hidden insights
- Machine Learning Engineer: Specializes in taking the models created by data scientists and deploying them into production, ensuring they run reliably and at scale
| Role | Focus | Key Activities |
|---|---|---|
| Data Analyst | Describing the past and present | Creates reports and dashboards, uses descriptive statistics, and answers business questions with visualizations. |
| Data Scientist | Prediction and inference | Builds machine learning models, experiments with data, forecasts future outcomes, and uncovers hidden insights. |
| Machine Learning Engineer | Deploying and scaling models | Turns models into production-ready systems, ensures scalability and reliability, and monitors model performance over time. |
Understanding these distinctions helps cut through the buzzwords and makes it easier to see how the pieces fit together.
# Tools of the Trade
So, how do data scientists actually do their work? A key part of the story is the toolkit they rely on to accomplish their tasks.
Data scientists commonly use programming languages like Python and R. Popular libraries for Python (for example) include:
- Pandas for data manipulation
- Matplotlib and Seaborn for visualization
- Scikit-learn or PyTorch for building machine learning models
These tools lower the barrier to entry and make it possible to quickly move from raw data to actionable insights, without having to focus on building your own tools from scratch.
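As a rough, self-contained illustration (not taken from any specific project, and assuming a hypothetical sales.csv file with temperature and sales columns), here is how those libraries might combine to go from raw data to a first insight:
```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Load and summarize a hypothetical dataset of daily sales.
df = pd.read_csv("sales.csv")
print(df.describe())

# Fit a simple model: how does temperature relate to sales?
model = LinearRegression().fit(df[["temperature"]], df["sales"])
print("Estimated effect of temperature on sales:", model.coef_[0])

# Visualize the relationship.
df.plot.scatter(x="temperature", y="sales", title="Sales vs. Temperature")
plt.show()
```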
# Conclusion
Data science is a blended, multidisciplinary field that combines math, computer science, and domain expertise to reveal patterns, answer questions, and guide decisions. It isn’t the same as AI or machine learning, though those often play a part. Instead, it’s the structured, practical application of data to solve real-world problems and drive impact.
From retail to healthcare to finance, its applications are everywhere. Whether you’re just getting started or clarifying the buzzwords, understanding the scope, processes, and roles in data science provides a clear first step into this exciting field.
I hope you’ve enjoyed this concise, gentle introduction!
Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
5 Reasons Why Vibe Coding Threatens Secure Data App Development


Image by Author | ChatGPT
# Introduction
AI-generated code is everywhere. Since early 2025, “vibe coding” (letting AI write code from simple prompts) has exploded across data science teams. It’s fast, it’s accessible, and it’s creating a security disaster. Recent research from Veracode shows AI models pick insecure code patterns 45% of the time. For Java applications? That jumps to 72%. If you’re building data apps that handle sensitive information, these numbers should worry you.
AI coding promises speed and accessibility. But let’s be honest about what you’re trading for that convenience. Here are five reasons why vibe coding poses threats to secure data application development.
# 1. Your Code Learns From Broken Examples
The problem starts with the training data: a majority of analyzed codebases contain at least one vulnerability, and many harbor high-risk flaws. When you use AI coding tools, you are rolling the dice with patterns learned from this vulnerable code.
AI assistants can’t tell secure patterns from insecure ones. This leads to SQL injections, weak authentication, and exposed sensitive data. For data applications, this creates immediate risks where AI-generated database queries enable attacks against your most critical information.
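As a minimal, hypothetical sketch (using Python’s built-in sqlite3 module and an invented patients table, not code from the cited research), here is the difference between the injectable query pattern assistants often emit and a parameterized one:
```python
import sqlite3

def get_patient_rows_unsafe(conn: sqlite3.Connection, name: str):
    # Risky pattern: user input is concatenated into the SQL string,
    # so input like "x' OR '1'='1" changes the query's meaning.
    query = f"SELECT * FROM patients WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def get_patient_rows_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT * FROM patients WHERE name = ?", (name,)
    ).fetchall()
```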
# 2. Hardcoded Credentials and Secrets in Data Connections
AI code generators have a dangerous habit of hardcoding credentials directly in source code, creating a security nightmare for data applications that connect to databases, cloud services, and APIs containing sensitive information. This practice becomes catastrophic when these hardcoded secrets persist in version control history and can be discovered by attackers years later.
AI models often generate database connections with passwords, API keys, and connection strings embedded directly in application code rather than using secure configuration management. The convenience of having everything just work in AI-generated examples creates a false sense of security while leaving your most sensitive access credentials exposed to anyone with code repository access.
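A minimal sketch of the difference, with an invented DB_PASSWORD variable standing in for any secret an AI assistant might otherwise inline:
```python
import os

# Anti-pattern often seen in generated examples: the secret is committed to
# source control and lives in the repository history forever.
# DB_PASSWORD = "s3cr3t-password"

# Safer sketch: read the secret from the environment (populated by a secret
# manager or the deployment platform) and fail loudly if it is missing.
DB_PASSWORD = os.environ.get("DB_PASSWORD")
if DB_PASSWORD is None:
    raise RuntimeError("DB_PASSWORD is not set; configure it outside the codebase")
```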
# 3. Missing Input Validation in Data Processing Pipelines
Data science applications frequently handle user inputs, file uploads, and API requests, yet AI-generated code consistently fails to implement proper input validation. This creates entry points for malicious data injection that can corrupt entire datasets or enable code execution attacks.
AI models may lack information about an application’s security requirements and can produce code that accepts any filename without validation, enabling path traversal attacks. This becomes dangerous in data pipelines, where unvalidated inputs can corrupt entire datasets, bypass security controls, or allow attackers to access files outside the intended directory structure.
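One way to close this particular gap, as a sketch (the upload directory path and function name are hypothetical):
```python
from pathlib import Path

# Hypothetical directory that uploaded files are allowed to live in.
UPLOAD_DIR = Path("/srv/data/uploads").resolve()

def resolve_upload(filename: str) -> Path:
    # Resolve the requested path and refuse anything that escapes UPLOAD_DIR,
    # e.g. "../../etc/passwd"-style path traversal attempts.
    candidate = (UPLOAD_DIR / filename).resolve()
    if not candidate.is_relative_to(UPLOAD_DIR):
        raise ValueError(f"Rejected path outside the upload directory: {filename}")
    return candidate
```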
# 4. Inadequate Authentication and Authorization
AI-generated authentication systems often implement basic functionality without considering the security implications for data access control, creating weak points in your application’s security perimeter. Real cases have shown AI-generated code storing passwords using deprecated algorithms like MD5, implementing authentication without multi-factor authentication, and creating insufficient session management systems.
Data applications require solid access controls to protect sensitive datasets, but vibe coding frequently produces authentication systems that lack role-based access controls for data permissions. The AI’s training on older, simpler examples means it often suggests authentication patterns that were acceptable years ago but are now considered security anti-patterns.
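For instance, here is a hedged sketch of the contrast, using only the standard library (production systems typically reach for a dedicated library such as argon2 or bcrypt):
```python
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Deprecated pattern assistants sometimes suggest:
    #   hashlib.md5(password.encode()).hexdigest()
    # MD5 is fast and unsalted, so leaked hashes are trivially cracked.
    #
    # Safer: a salted, deliberately slow key-derivation function.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```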
# 5. False Security From Inadequate Testing
Perhaps the most dangerous aspect of vibe coding is the false sense of security it creates when applications appear to function correctly while harboring serious security flaws. AI-generated code often passes basic functionality tests while concealing vulnerabilities like logic flaws that affect business processes, race conditions in concurrent data processing, and subtle bugs that only appear under specific conditions.
The problem is exacerbated because teams using vibe coding may lack the technical expertise to identify these security issues, creating a dangerous gap between perceived security and actual security. Organizations become overconfident in their applications’ security posture based on successful functional testing, not realizing that security testing requires entirely different methodologies and expertise.
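A tiny, invented example of the kind of flaw a happy-path functional test never catches:
```python
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# The functional test passes...
assert apply_discount(200.0, 50) == 100.0

# ...yet nothing stops a caller from passing percent=200 (a negative price)
# or percent=-50 (a silent overcharge): the kind of business-logic flaw that
# only dedicated security and edge-case testing surfaces.
```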
# Building Secure Data Applications in the Age of Vibe Coding
The rise of vibe coding doesn’t mean data science teams should abandon AI-assisted development entirely. GitHub Copilot increased task completion speed for both junior and senior developers, demonstrating clear productivity benefits when used responsibly.
But here’s what actually works: successful teams using AI coding tools implement multiple safeguards rather than hoping for the best. The key is to never deploy AI-generated code without a security review; use automated scanning tools to catch common vulnerabilities; implement proper secret management systems; establish strict input validation patterns; and never rely solely on functional testing for security validation.
Successful teams implement a multi-layered approach:
- Security-aware prompting that includes explicit security requirements in every AI interaction
- Automated security scanning with tools like OWASP ZAP and SonarQube integrated into CI/CD pipelines
- Human security review by security-trained developers for all AI-generated code
- Continuous monitoring with real-time threat detection
- Regular security training to keep teams current on AI coding risks
# Conclusion
Vibe coding represents a major shift in software development, but it comes with serious security risks for data applications. The convenience of natural language programming can’t override the need for security-by-design principles when handling sensitive data.
There has to be a human in the loop. If an application is fully vibe-coded by someone who cannot even review the code, they cannot determine whether it is secure. Data science teams must approach AI-assisted development with both enthusiasm and caution, embracing the productivity gains while never sacrificing security for speed.
The companies that figure out secure vibe coding practices today will be the ones that thrive tomorrow. Those that don’t may find themselves explaining security breaches instead of celebrating innovation.
Vinod Chugani was born in India and raised in Japan, and brings a global perspective to data science and machine learning education. He bridges the gap between emerging AI technologies and practical implementation for working professionals. Vinod focuses on creating accessible learning pathways for complex topics like agentic AI, performance optimization, and AI engineering. His work centers on practical machine learning implementations and on mentoring the next generation of data professionals through live sessions and personalized guidance.