Jobs & Careers
Claude Now Available in Xcode for Developers

Anthropic has made Claude generally available in Xcode 26, Apple’s flagship integrated development environment (IDE). The new integration brings Claude Sonnet 4 directly into developers’ workflows, enabling AI-powered coding intelligence features for building, testing and distributing apps across Apple platforms.
With this release, developers can connect their Claude account to Xcode and access an assistant that interacts with code using natural language. The system automatically gathers context from the project, retains conversation history and supports file attachments, helping teams debug issues, refactor large sections and quickly build new features.
Beyond assistance, Claude introduces coding tools designed to streamline development tasks. These include generating documentation, providing explanations for specific code segments and producing SwiftUI previews and playgrounds. Developers can also make inline code changes directly within the editor, reducing the need to switch between tools.
The availability of Claude in Xcode ties into existing subscription plans. Usage limits are shared across platforms, with a portion allocated for Xcode integration. The feature is available to users on Pro and Max plans, as well as Team and Enterprise customers with premium seats that include Claude Code.
To get started, developers need to download Xcode 26 from the Mac App Store, navigate to Intelligence settings in preferences and sign in with their Claude account. Once enabled, the integration brings Anthropic’s AI capabilities to Apple’s development ecosystem, making Xcode a more powerful environment for both individual programmers and larger teams.
The move signals a growing industry focus on embedding AI coding assistants directly into mainstream development environments, following GitHub Copilot in VS Code and the Gemini CLI integration in Zed.
The Lazy Data Scientist’s Guide to Time Series Forecasting


Image by Editor | ChatGPT
# Introduction
Time series forecasting is everywhere in business. Whether you’re predicting sales for next quarter, estimating inventory demand, or planning financial budgets, accurate forecasts can make — or break — strategic decisions.
However, classical time series approaches — like painstaking ARIMA tuning — are complicated and time-consuming.
This presents a dilemma for many data scientists, analysts, and BI professionals: precision versus practicality.
That’s where a lazy data scientist’s mindset comes in. Why spend weeks fine-tuning models when modern Python forecasting libraries and AutoML can give you an adequate solution in less than a minute?
In this guide, you’ll learn how to adopt an automated forecasting approach that delivers fast, reasonable accuracy — without guilt.
# What Is Time Series Forecasting?
Time series forecasting is the process of predicting future values from a sequence of historical observations. Common applications include sales, energy demand, finance, and weather, among others.
Four key concepts drive time series:
- Trend: the long-term tendency, shown by increases or decreases over an extended period.
- Seasonality: patterns that repeat regularly within a year (daily, weekly, monthly) and are associated with the calendar.
- Cyclical: repeating movements or oscillations lasting more than a year, often driven by macroeconomic conditions.
- Irregular or noise: random fluctuations we cannot explain.
To further understand time series, see this Guide to Time Series with Pandas.
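The four components above are easiest to grasp by pulling a series apart yourself. Here is a minimal decomposition sketch using only pandas, on a synthetic series I made up for illustration: a centered rolling mean approximates the trend, and per-calendar-month averages approximate the seasonality (statsmodels' seasonal_decompose does this more rigorously).

```python
import numpy as np
import pandas as pd

# Synthetic monthly series: linear trend + yearly seasonality + noise
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(42)
y = pd.Series(
    0.5 * np.arange(48)                            # trend
    + 5 * np.sin(2 * np.pi * np.arange(48) / 12)   # seasonality
    + rng.normal(0, 0.5, 48),                      # noise
    index=idx,
)

# Trend: centered 12-month rolling mean
trend = y.rolling(window=12, center=True).mean()

# Seasonality: average detrended value for each calendar month
detrended = y - trend
seasonal = detrended.groupby(detrended.index.month).transform("mean")

# Irregular component: whatever trend and seasonality don't explain
residual = y - trend - seasonal
print(residual.dropna().abs().mean())
```

The residual should be small compared to the seasonal swing; if it isn't, the assumed period (here 12) is probably wrong.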


Image by Author
# The Lazy Approach to Forecasting
The “lazy” approach is simple: stop reinventing the wheel. Instead, rely on automation and pre-built models to save time.
This approach prioritizes speed and practicality over perfect fine-tuning. Consider it like using Google Maps: you arrive at the destination without worrying about how the system calculates every road and traffic condition.
# Essential Tools for Lazy Forecasting
Now that we have established what the lazy approach looks like, let’s put it into practice. Rather than developing models from the ground up, you can leverage well-tested Python libraries and AutoML frameworks that will do most of the work for you.
Some libraries, like Prophet and Auto ARIMA, are great for plug-and-play forecasting with very little tuning, while others, like sktime and Darts, provide an ecosystem with great versatility where you can do everything from classical statistics to deep learning.
Let’s break them down:
// Facebook Prophet
Prophet is a plug-and-play library created by Facebook (Meta) that’s especially good at capturing trends and seasonality in business data. With just a few lines of code, you can produce forecasts that include uncertainty intervals, with no heavy parameter tuning required.
Here is a sample code snippet:
from prophet import Prophet
import pandas as pd
# Load data (columns: ds = date, y = value)
df = pd.read_csv("sales.csv", parse_dates=["ds"])
# Fit a simple Prophet model
model = Prophet()
model.fit(df)
# Make future predictions
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
# Plot forecast
model.plot(forecast)
// Auto ARIMA (pmdarima)
ARIMA models are a traditional approach to time series forecasting; however, tuning their parameters (p, d, q) takes time. Auto ARIMA in the pmdarima library automates this selection, so you can obtain a reliable baseline forecast without guesswork.
Here is some code to get started:
import pmdarima as pm
import pandas as pd
# Load time series (single column with values)
df = pd.read_csv("sales.csv")
y = df["y"]
# Fit Auto ARIMA (monthly seasonality example)
model = pm.auto_arima(y, seasonal=True, m=12)
# Forecast next 30 steps
forecast = model.predict(n_periods=30)
print(forecast)
// Sktime and Darts
If you want to go beyond classical methods, libraries like sktime and Darts give you a playground to test dozens of models: from simple ARIMA to advanced deep learning forecasters.
They’re great for experimenting with machine learning for time series without needing to code everything from scratch.
Here is a simple code example to get started:
from darts.datasets import AirPassengersDataset
from darts.models import ExponentialSmoothing
# Load example dataset
series = AirPassengersDataset().load()
# Fit a simple model
model = ExponentialSmoothing()
model.fit(series)
# Forecast 12 future values
forecast = model.predict(12)
series.plot(label="actual")
forecast.plot(label="forecast")
// AutoML Platforms (H2O, AutoGluon, Azure AutoML)
In an enterprise environment, there are moments when you simply want forecasts with minimal code and as much automation as possible.
AutoML platforms like H2O AutoML, AutoGluon, or Azure AutoML can ingest raw time series data, test several models, and deliver the best-performing model.
Here is a quick example using AutoGluon:
from autogluon.timeseries import TimeSeriesPredictor
import pandas as pd
# Load dataset (must include columns: item_id, timestamp, target)
train_data = pd.read_csv("sales_multiseries.csv")
# Fit AutoGluon Time Series Predictor
predictor = TimeSeriesPredictor(
    prediction_length=12,
    path="autogluon_forecasts",
).fit(train_data)
# Generate forecasts for the same series
forecasts = predictor.predict(train_data)
print(forecasts)
# When “Lazy” Isn’t Enough
Automated forecasting works very well most of the time. However, you should always keep in mind:
- Domain complexity: when you have promotions, holidays, or pricing changes, you may need custom features.
- Unusual circumstances: pandemics, supply chain shocks, and other rare events.
- Mission-critical accuracy: for high-stakes scenarios (finance, healthcare, etc.), you will want to be fastidious.
“Lazy” does not mean careless. Always sanity-check your predictions before using them in business decisions.
# Best Practices for Lazy Forecasting
Even if you’re taking the lazy way out, follow these tips:
- Always visualize forecasts and confidence intervals.
- Compare against simple baselines (last value, moving average).
- Automate retraining with pipelines (Airflow, Prefect).
- Save models and reports to ensure reproducibility.
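The baseline bullet is worth making concrete. Here is a sketch on a synthetic trending series (the series and names are mine, not from any library) comparing three cheap baselines by mean absolute error; any automated model you ship should beat them on a held-out tail:

```python
import numpy as np

# Synthetic trending series: 0.5/step drift plus mild noise
rng = np.random.default_rng(0)
y = 0.5 * np.arange(100) + rng.normal(0, 0.5, 100)

# Hold out the last 20 points as a test set
train, test = y[:80], y[80:]

# Baseline 1 -- naive: repeat the last observed value
naive = np.full(len(test), train[-1])

# Baseline 2 -- moving average: repeat the mean of the last 5 observations
moving_avg = np.full(len(test), train[-5:].mean())

# Baseline 3 -- drift: extend the average historical step forward
slope = (train[-1] - train[0]) / (len(train) - 1)
drift = train[-1] + slope * np.arange(1, len(test) + 1)

def mae(pred, actual):
    """Mean absolute error."""
    return float(np.mean(np.abs(pred - actual)))

for name, pred in [("naive", naive), ("moving average", moving_avg), ("drift", drift)]:
    print(f"{name}: MAE = {mae(pred, test):.2f}")
```

On a trending series like this, the drift baseline wins easily; if your Prophet or Auto ARIMA model can't beat it, something is wrong with the setup.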
# Wrapping Up
Time series forecasting does not need to be scary — or exhaustive.
You can get accurate, interpretable forecasts in minutes with Python forecasting libraries like Prophet or Auto ARIMA, as well as AutoML frameworks.
So remember: being a “lazy” data scientist does not mean you are careless; it means you are being efficient.
Josep Ferrer is an analytics engineer from Barcelona. He graduated in physics engineering and is currently working in the data science field applied to human mobility. He is a part-time content creator focused on data science and technology. Josep writes on all things AI, covering the application of the ongoing explosion in the field.
How to Build Production-Ready UI Prototypes in Minutes Using Google Stitch


# Introduction
What usually happens during app development is that you and your team finalize a design, go through development and testing, and when it’s finally time to review what you’ve been building for weeks, it just feels off. It either looks underdeveloped or simply “not right.” Too many iterations, too much time, and somewhere along the way, the original idea gets lost.
To solve this, Google introduced a new tool called Google Stitch. You just give simple English commands and get a beautiful, responsive, production-ready prototype you can deploy. And the best part? It’s free to try at stitch.withgoogle.com. In this article, I’ll walk you through my experience using Google Stitch and teach you how to get started too.
# Getting Started with Google Stitch
Head over to stitch.withgoogle.com and sign in with your Google account before starting.
// 1. Choose Your Mode
From the top-right corner of the screen, you can switch between the following modes:
- Standard Mode (Gemini 2.5 Flash): Best for quick drafts and MVPs.
- Experimental Mode (Gemini 2.5 Pro): Lets you generate UI from image inputs, wireframes, or sketches that you can upload for inspiration.
// 2. Describe or Upload
You’ll see a prompt box with a canvas next to it.
- Text prompt example: “A signup form page with logo at top, email/password fields, submit button in primary color.”
- Image prompt (Experimental): Upload a wireframe or screenshot to guide the generation alongside your text.
For instance, I used Standard Mode and gave the following prompt:
“Quiz page in a language learning app with a progress bar at the top. The title challenges you to match an Urdu word with the correct answer, offering four possible options.”
It generated an amazing UI based on this:
// 3. Preview and Tweak
Use the sidebar to adjust themes: change color palettes, fonts, border radius, and switch between dark and light mode. Google Stitch also lets you modify designs using updated prompts.
For example, I updated the theme to dark and changed the font to Manrope. After clicking Apply Theme, the output looked even more polished:


// 4. Iterate Smartly
You can refine individual components or screens. For example:
- “Make the primary button bigger and blue.”
- “Add a navigation bar to the top of the homepage.”
Stitch follows your step-by-step instructions very accurately. In my case, the Urdu words initially appeared as romanized text. So I updated the prompt:
“Display the Urdu word in a right-to-left direction with a large Nastaliq-style font, centered on the screen.”
The result was genuinely impressive:


// 5. Export and Build
You can click on the generated image to copy the code, or hit Paste to Figma to drop editable, auto-layout artboards directly into your design workspace.
Here’s what showed up when I clicked on the image and selected Copy Code. It was instantly ready to integrate into a dev environment or a design file.
# Final Thoughts and Getting the Best Results
Although it’s not a complete design solution, Google Stitch is highly recommended for MVPs and early-stage development. You can always export to Figma for advanced design customization or to build multi-screen logic.
Here are a few tips for getting better results:
- Use UI-specific language like “navbar,” “dashboard widgets,” “primary button,” or “auto-spacing” to guide the structure more accurately.
- Start with a high-level description and refine it step-by-step. For example: “fitness tracking app.”
- Be very specific when editing. Mention elements clearly, such as “change the color of the primary button on the signup form to white.”
Google Stitch is fast, intuitive, and gives you a great starting point when you need working prototypes without getting stuck in weeks of back-and-forth. Definitely worth trying.
Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.
7 Free Web Search APIs for AI Agents


Image by Editor | ChatGPT
# Introduction
AI agents are only as effective as their access to fresh, reliable information. Behind the scenes, many agents use web search tools to pull the latest context and ensure their outputs remain relevant. However, not all search APIs are created equal, and not every option will fit seamlessly into your stack or workflow.
In this article, we review the top 7 web search APIs that you can integrate into your agent workflows. For each API, you will find example Python code to help you get started quickly. Best of all, every API we cover offers a free (though limited) tier, allowing you to experiment without needing to enter a credit card or encounter additional hurdles.
1. Firecrawl
Firecrawl provides a dedicated Search API built “for AI,” alongside its crawl/scrape stack. You can choose your output format: clean Markdown, raw HTML, link lists, or screenshots, so the data fits your downstream workflow. It also supports customizable search parameters (e.g. language and country) to target results by locale, and is built for AI agents that need web data at scale.
Installation: pip install firecrawl-py
from firecrawl import Firecrawl
firecrawl = Firecrawl(api_key="fc-YOUR-API-KEY")
results = firecrawl.search(
    query="KDnuggets",
    limit=3,
)
print(results)
2. Tavily
Tavily is a search engine for AI agents and LLMs that turns queries into vetted, LLM-ready insights in a single API call. Instead of returning raw links and noisy snippets, Tavily aggregates up to 20 sources, then uses proprietary AI to score, filter, and rank the most relevant content for your task, reducing the need for custom scraping and post-processing.
Installation: pip install tavily-python
from tavily import TavilyClient
tavily_client = TavilyClient(api_key="tvly-YOUR_API_KEY")
response = tavily_client.search("Who is MLK?")
print(response)
3. Exa
Exa is an innovative, AI-native search engine that offers four modes: Auto, Fast, Keyword, and Neural. These modes effectively balance precision, speed, and semantic understanding. Built on its own high-quality web index, Exa uses embeddings-powered “next-link prediction” in its Neural search. This feature surfaces links based on meaning rather than exact words, making it particularly effective for exploratory queries and complex, layered filters.
Installation: pip install exa_py
from exa_py import Exa
import os
exa = Exa(os.getenv('EXA_API_KEY'))
result = exa.search(
    "hottest AI medical startups",
    num_results=2
)
4. Serper.dev
Serper is a fast and cost-effective Google SERP (Search Engine Results Page) API that delivers results in just 1 to 2 seconds. It supports all major Google verticals in one API, including Search, Images, News, Maps, Places, Videos, Shopping, Scholar, Patents, and Autocomplete. It provides structured SERP data, enabling you to build real-time search features without the need for scraping. Serper lets you get started instantly with 2,500 free search queries, no credit card required.
Installation: pip install --upgrade --quiet langchain-community langchain-openai
import os
import pprint
os.environ["SERPER_API_KEY"] = "your-serper-api-key"
from langchain_community.utilities import GoogleSerperAPIWrapper
search = GoogleSerperAPIWrapper()
search.run("Top 5 programming languages in 2025")
5. SerpAPI
SerpApi offers a powerful Google Search API, along with support for additional search engines, delivering structured Search Engine Results Page data. It features robust infrastructure, including global IPs, a complete browser cluster, and CAPTCHA solving to ensure reliable and accurate results. Additionally, SerpApi provides advanced parameters, such as precise location controls through the location parameter and a /locations.json helper.
Installation: pip install google-search-results
from serpapi import GoogleSearch
params = {
    "engine": "google_news",        # use Google News engine
    "q": "Artificial Intelligence", # search query
    "hl": "en",                     # language
    "gl": "us",                     # country
    "api_key": "secret_api_key"     # replace with your SerpAPI key
}
search = GoogleSearch(params)
results = search.get_dict()
# Print top 5 news results with title + link
for idx, article in enumerate(results.get("news_results", []), start=1):
    print(f"{idx}. {article['title']} - {article['link']}")
6. SearchApi
SearchApi offers real-time SERP scraping across many engines and verticals. It exposes Google Web search along with specialized endpoints such as Google News, Scholar, Autocomplete, Lens, Finance, Patents, Jobs, and Events, plus non-Google sources like Amazon, Bing, Baidu, and Google Play. This breadth lets agents target the right vertical while keeping a single JSON schema and a consistent integration path.
import requests
url = "https://www.searchapi.io/api/v1/search"
params = {
    "engine": "google_maps",
    "q": "best sushi restaurants in New York",
    "api_key": "your-searchapi-key"  # replace with your SearchApi key
}
response = requests.get(url, params=params)
print(response.text)
7. Brave Search
Brave Search offers a privacy-first API on an independent web index, with endpoints for web, news, and images that work well for grounding LLMs without user tracking. It’s developer-friendly, performant, and includes a free usage plan.
import requests
url = "https://api.search.brave.com/res/v1/web/search"
headers = {
    "Accept": "application/json",
    "Accept-Encoding": "gzip",
    "X-Subscription-Token": ""  # paste your Brave Search API key here
}
params = {
    "q": "greek restaurants in san francisco"
}
response = requests.get(url, headers=headers, params=params)
if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print(f"Error {response.status_code}: {response.text}")
# Wrapping Up
I pair search APIs with Cursor IDE through MCP Search to pull fresh documentation right inside my editor, which speeds up debugging and improves my programming flow. These tools power real-time web applications, agentic RAG workflows, and more, while keeping outputs grounded and reducing hallucinations in sensitive scenarios.
Key advantages:
- Customization for precise queries, including filters, freshness windows, region, and language
- Flexible output formats like JSON, Markdown, or plaintext for seamless agent handoffs
- The option to search and scrape the web to enrich context for your AI agents
- Free tiers and affordable usage-based pricing so you can experiment and scale without worry
Pick the API that matches your stack, latency needs, content coverage, and budget. If you need a place to start, I highly recommend Firecrawl and Tavily. I use both almost every day.
Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master’s degree in technology management and a bachelor’s degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.