New Microsoft AI Model Brings 10x Speed to Reasoning on Edge Devices, Apps
Microsoft has released Phi-4-mini-flash-reasoning, a compact AI model engineered for fast, on-device logical reasoning. The new addition to the Phi family is designed for low-latency environments, such as mobile apps and edge deployments, and delivers up to 10 times higher throughput and two to three times lower latency than its predecessor.
The 3.8 billion parameter open model maintains support for a 64k token context length and is fine-tuned on high-quality synthetic data for structured, math-focused reasoning tasks. Unlike earlier Phi models, Phi-4-mini-flash-reasoning introduces a new “decoder-hybrid-decoder” architecture called SambaY, combining state-space models (Mamba), sliding window attention, and a novel Gated Memory Unit (GMU) to reduce decoding complexity and boost long-context performance.
According to Microsoft, this setup allows the model to maintain linear prefill computation time while interleaving lightweight GMUs with expensive attention layers. The result is significantly improved inference efficiency, making it viable for use on a single GPU or in latency-sensitive deployments, such as real-time tutoring tools and adaptive learning apps.
Benchmarks shared by Microsoft indicate that Phi-4-mini-flash-reasoning outperforms models twice its size on tasks such as AIME24/25 and Math500, while maintaining faster response times on the vLLM inference framework.
The release aligns with Microsoft’s broader push for responsible AI, with safety mechanisms including supervised fine-tuning (SFT), direct preference optimisation (DPO), and reinforcement learning from human feedback (RLHF). The company notes that all Phi models follow its core principles of transparency, privacy, and inclusiveness.
The model is already available through Azure AI Foundry, Hugging Face, and the NVIDIA API Catalogue. For more technical details, one can explore the research paper and the Phi Cookbook for developers.
Recently, Hugging Face also introduced SmolLM3, a 3B-parameter open language model featuring long-context reasoning (up to 128k tokens), multilingual support (six languages), and dual inference modes. Trained on 11.2 trillion tokens, it outperforms peers like Llama-3.2-3B and Qwen2.5-3B, while competing with larger 4B models such as Gemma3 and Qwen3.
With continued advances in small language models and the addition of reasoning capabilities, on-device AI performance should increasingly rival that of models with far more parameters.
Automate SQL Workflows with n8n: Scheduled Database Reports via Email
Image by Author | ChatGPT
The Hidden Cost of Routine SQL Reporting
Data teams across organizations face the same recurring challenge: stakeholders require regular reports, but manual SQL reporting consumes valuable time that could be spent on analysis. The process remains consistent regardless of company size — connect to the database, execute queries, format results, and distribute findings to decision-makers.
Data professionals routinely handle reporting tasks that don’t require advanced statistical knowledge or domain expertise, yet they consume significant time through repetitive execution of the same queries and formatting procedures.
This workflow addresses a fundamental efficiency problem: transforming one-time setup into ongoing automated delivery of professional reports directly to stakeholder inboxes.
The Solution: A 4-Node Automated Reporting Pipeline
Building on our previous n8n exploration, this workflow tackles a different automation challenge: scheduled SQL reporting. While our first tutorial focused on data quality analysis, this one demonstrates how n8n handles database integration, recurring schedules, and email distribution.
Unlike writing standalone Python scripts for reporting, n8n workflows are visual, reusable, and easy to modify. You can connect databases, perform transformations, run analyses, and deliver results — all without switching between different tools or environments. Each workflow consists of “nodes” that represent different actions, connected together to create an automated pipeline.
Our automated SQL reporter consists of four connected nodes that transform manual reporting into a hands-off process:
- Schedule Trigger – Runs every Monday at 9 AM
- PostgreSQL Node – Executes sales query against database
- Code Node – Transforms raw data into formatted HTML report
- Send Email Node – Delivers professional report to stakeholders
Building the Workflow: Step-by-Step Implementation
Prerequisites
To follow along, you will need a running n8n instance, a free Supabase account (or any other PostgreSQL database you can connect to), and a Gmail account with 2-step verification enabled for SMTP sending.
Step 1: Set Up Your PostgreSQL Database
We’ll create a realistic sales database using Supabase for this tutorial. Supabase is a cloud-based PostgreSQL platform that provides managed databases with built-in APIs and authentication—making it ideal for rapid prototyping and production applications. While this tutorial uses Supabase for convenience, the n8n workflow connects to any PostgreSQL database, including AWS RDS, Google Cloud SQL, or your organization’s existing database infrastructure.
Create Supabase Account:
- Visit supabase.com and sign up for free
- Create new project – choose any name and region
- Wait for setup – takes about 2 minutes for database provisioning
- View your connection details from the Settings > Database page (or the “connect” button on the main page)
Load Sample Data:
Navigate to the SQL Editor in Supabase and run this setup script to create our sales database tables and populate them with sample data:
-- Create employees table
CREATE TABLE employees (
emp_id SERIAL PRIMARY KEY,
first_name VARCHAR(50),
last_name VARCHAR(50),
department VARCHAR(50)
);
-- Create sales table
CREATE TABLE sales (
sale_id SERIAL PRIMARY KEY,
emp_id INTEGER REFERENCES employees(emp_id),
sale_amount DECIMAL(10,2),
sale_date DATE
);
-- Insert sample employees
INSERT INTO employees (first_name, last_name, department) VALUES
('Mike', 'Johnson', 'Sales'),
('John', 'Doe', 'Sales'),
('Tom', 'Wilson', 'Sales'),
('Sarah', 'Chen', 'Marketing');
-- Insert recent sales data
INSERT INTO sales (emp_id, sale_amount, sale_date) VALUES
(1, 2500.00, CURRENT_DATE - 2),
(1, 1550.00, CURRENT_DATE - 5),
(2, 890.00, CURRENT_DATE - 1),
(2, 1500.00, CURRENT_DATE - 4),
(3, 3200.00, CURRENT_DATE - 3),
(4, 1200.00, CURRENT_DATE - 6);
Paste this entire script into the SQL Editor and click the “Run” button in the bottom-right corner. You should see “Success. No rows returned” confirming that your tables and sample data have been created successfully.
Test Your Connection:
Within the same SQL Editor, run a fresh query to verify everything works: SELECT COUNT(*) FROM employees;
You should see 4 employees in the results.
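If you also want to spot-check the sales data, a quick ad-hoc query (not part of the workflow itself) confirms that each sale joins back to an employee:
-- Optional sanity check: list recent sales with the employee who made them
SELECT e.first_name, e.last_name, s.sale_amount, s.sale_date
FROM sales s
JOIN employees e ON e.emp_id = s.emp_id
ORDER BY s.sale_date DESC;
You should get six rows, one for each sale inserted above.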
Step 2: Configure Gmail for Automated Sending
Enable App Password:
- Turn on 2-step verification in your Google Account settings
- Generate app password – go to Security > App passwords
- Select “Mail” and “Other” – name it “n8n reporting”
- Copy the 16-character password – you’ll need this for n8n
Step 3: Import and Configure the Workflow
Import the Template:
- Download the workflow file
- Open n8n and click “Import from File”
- Select the downloaded file – all four nodes appear automatically
- Save the workflow as “Automated SQL Reporting”
The imported workflow contains four connected nodes with all the complex SQL and formatting code already configured.
Configure Database Connection:
- Click the PostgreSQL node
- Get your connection details from Supabase by clicking the “Connect” button on your main page. For n8n integration, use the “Transaction pooler” connection string as it’s optimized for automated workflows:
- Create new credential with your Supabase details:
- Host: [your-project].supabase.com
- Database: postgres
- User: postgres…..
- Password: [from Supabase settings]
- Port: 6543
- SSL: Enable
- Test connection – you should see a green success message
Configure Email Settings:
- Click the Send Email node
- Create SMTP credential:
- Host: smtp.gmail.com
- Port: 587
- User: your-email@gmail.com
- Password: [your app password]
- Secure: Enable STARTTLS
- Update recipient in the “To Email” field
That’s it! The reporting logic adapts to different database schemas, table names, and data types, so the same workflow can be reused for other queries with minimal changes.
Step 4: Test and Deploy
- Click “Execute Workflow” in the toolbar
- Watch each node turn green as it processes
- Check your email – you should receive the formatted report
- Toggle to “Active” to enable Monday morning automation
Once the setup is complete, you’ll receive automatic weekly reports without any manual intervention.
Understanding Your Automated Report
Here’s what your stakeholders will receive every Monday:
Email Subject: 📊 Weekly Sales Report – June 27, 2025
Report Content:
- Clean HTML table with proper styling and borders
- Summary statistics calculated automatically from SQL results
- Professional formatting suitable for executive stakeholders
- Timestamp and metadata for audit trails
Here’s what the final report looks like:
The workflow automatically handles all the complex formatting and calculations behind this professional output. Notice how the report includes proper currency formatting, calculated averages, and clean table styling—all generated directly from raw SQL results without any manual intervention. The email arrives with a timestamp, making it easy for stakeholders to track reporting periods and maintain audit trails for decision-making processes.
Technical Deep Dive: Understanding the Implementation
Schedule Trigger Configuration:
The workflow runs every Monday at 9:00 AM using n8n’s interval scheduling. This timing ensures reports arrive before weekly team meetings.
SQL Query Logic:
The PostgreSQL node executes a sophisticated query with JOINs, date filtering, aggregations, and proper numeric formatting. It automatically:
- Joins employee and sales tables for complete records
- Filters data to the last 7 days using CURRENT_DATE - INTERVAL '7 days'
- Calculates total sales, revenue, and averages per person
- Orders results by revenue for business prioritization
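Putting those pieces together, here is a minimal sketch of an equivalent query, assuming the employees/sales schema from Step 1 (the query bundled in the imported template may differ in its details):
-- Illustrative weekly report query (sketch; the template's actual query may differ)
SELECT
    e.first_name || ' ' || e.last_name AS employee,
    e.department,
    COUNT(s.sale_id)                   AS total_sales,
    SUM(s.sale_amount)                 AS total_revenue,
    ROUND(AVG(s.sale_amount), 2)       AS avg_sale
FROM sales s
JOIN employees e ON e.emp_id = s.emp_id
WHERE s.sale_date >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY e.emp_id, e.first_name, e.last_name, e.department
ORDER BY total_revenue DESC;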
HTML Generation Logic:
The Code node transforms SQL results into professional HTML using JavaScript. It iterates through query results, builds styled HTML tables with consistent formatting, calculates summary statistics, and adds professional touches like emojis and timestamps.
Email Delivery:
The Send Email node uses Gmail’s SMTP service with proper authentication and HTML rendering support.
Testing with Different Scenarios
To see how the workflow handles varying data patterns, try these modifications:
- Different Time Periods: Change INTERVAL '7 days' to INTERVAL '30 days' for monthly reports (see the combined example after this list)
- Department Filtering: Add WHERE e.department = 'Sales' for team-specific reports (note the single quotes; PostgreSQL reserves double quotes for identifiers, not string literals)
- Different Metrics: Modify the SELECT clause to include product categories or customer segments
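For example, combining the first two modifications gives a monthly, Sales-only variant (again a sketch against the Step 1 schema):
-- Monthly report restricted to the Sales department
SELECT
    e.first_name || ' ' || e.last_name AS employee,
    COUNT(s.sale_id)   AS total_sales,
    SUM(s.sale_amount) AS total_revenue
FROM sales s
JOIN employees e ON e.emp_id = s.emp_id
WHERE s.sale_date >= CURRENT_DATE - INTERVAL '30 days'
  AND e.department = 'Sales'
GROUP BY e.emp_id, e.first_name, e.last_name
ORDER BY total_revenue DESC;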
Based on your business needs, you can determine next steps: weekly reports work well for operational teams, monthly reports suit strategic planning, quarterly reports serve executive dashboards, and daily reports help with real-time monitoring. The workflow adapts automatically to any SQL structure, allowing you to quickly create multiple reporting pipelines for different stakeholders.
Next Steps
1. Multi-Database Support
Replace the PostgreSQL node with MySQL, SQL Server, or any supported database. The workflow logic remains identical while connecting to different data sources. This flexibility makes the solution valuable across diverse technology stacks.
2. Advanced Scheduling
Modify the Schedule Trigger for different frequencies. Set up daily reports for operational metrics, monthly reports for strategic planning, or quarterly reports for board meetings. Each schedule can target different recipient groups with tailored content.
3. Enhanced Formatting
Extend the Code node to include charts and visualizations using Chart.js, conditional formatting based on performance thresholds, or executive summaries with key insights. The HTML output supports rich formatting and embedded graphics.
4. Multi-Recipient Distribution
Add logic to send different reports to different stakeholders. Sales managers receive individual team reports, executives receive high-level summaries, and finance teams receive revenue-focused metrics. This targeted approach ensures each audience gets relevant information.
Conclusion
This automated SQL reporting workflow demonstrates how n8n bridges the gap between data science expertise and operational efficiency. By combining database integration, scheduling, and email automation, you can eliminate routine reporting tasks while delivering professional results to stakeholders.
The workflow’s modular design makes it particularly valuable for data teams managing multiple reporting requirements. You can duplicate the workflow for different databases, modify the SQL queries for various metrics, and adjust the formatting for different audiences—all without writing custom scripts or managing server infrastructure.
Unlike traditional ETL tools that require extensive configuration, n8n’s visual interface makes complex data workflows accessible to both technical and non-technical team members. Your SQL expertise remains the core value, while n8n handles the automation infrastructure, scheduling reliability, and delivery mechanisms.
Most importantly, this approach scales with your organization’s needs. Start with simple weekly reports, then expand to include data visualizations, multi-database queries, or integration with business intelligence platforms. The foundation you build today becomes the automated reporting infrastructure that supports your team’s growth tomorrow.
Born in India and raised in Japan, Vinod brings a global perspective to data science and machine learning education. He bridges the gap between emerging AI technologies and practical implementation for working professionals, creating accessible learning pathways for complex topics like agentic AI, performance optimization, and AI engineering, and mentoring the next generation of data professionals through live sessions and personalized guidance.
Common Sense, Not AI, Could Have Prevented RCB Stampede: K’taka IT Minister

In the aftermath of the tragic stampede linked to Royal Challengers Bengaluru (RCB) celebrations that left 11 dead, Karnataka IT Minister Priyank Kharge said the situation called for common sense, not artificial intelligence.
“I think common sense would have helped there rather than AI,” said Kharge, addressing the issue of crowd and traffic mismanagement in a video podcast with AIM. Taking accountability, he said, “On that particular day, I think all of us could have done better.”
Kharge said the responsibility was collective and not limited to one entity. He pointed out that BCCI, IPL officials, the government, the home department, the RCB team, and fans could have all performed better in their respective roles.
Responding to whether AI could have helped manage the crowd flow, Kharge admitted the technology might have flagged high-density areas, but the ultimate call needed to be taken by people on the ground. “AI is going to tell us that there’s too much of a crowd and there needs management. The management has to happen physically,” he said.
Calling it an unfortunate incident, Kharge said the government has accepted responsibility and is taking steps to prevent such tragedies from occurring in the future.
“We’ve already owned up to it, and we’re working towards it. Probably we’ll be getting a crowd management bill as well, where the use of such technologies will be advocated,” the minister said. The Central Administrative Tribunal (CAT) has held RCB “prima facie responsible” for the incident, citing inadequate preparation and sudden announcements.
Earlier this week, however, RCB formally challenged this finding. The franchise argues that the CAT made these observations without giving the team a chance to present its side, violating the principles of natural justice.
Musk Claims that Grok-4 is Better Than Cursor
Elon Musk-led xAI has launched Grok 4 and Grok 4 Heavy, two versions of its latest AI model. The company describes Grok 4 as the single-agent version and Grok 4 Heavy as the multi-agent version.
Musk took to X to claim that Grok 4 outperforms the AI coding tool Cursor.
“You can cut & paste your entire source code file into the query entry box on grok.com and Grok 4 will fix it for you!” he said in a post on X. “This is what everyone at xAI does. Works better than Cursor,” he added.
As announced by Musk, the company also plans to launch a coding model focused on being “both fast and smart,” expected to be released in a few weeks.
Both models are available immediately and come bundled with the SuperGrok subscription tiers, where users can direct a network of Grok agents to assist with research and productivity.
Grok 4 is also accessible via API, and xAI claims the model leads key reasoning benchmarks, including ARC-AGI-2.
Meanwhile, over the past few days, users have been cancelling their Cursor subscriptions en masse, voicing outrage over what they see as an abrupt and poorly communicated change, particularly to the company’s heavily promoted “unlimited” Pro plan.
Users claim that Cursor’s Pro plan, which once promised unlimited usage, has effectively been capped. “Cursor constantly changing their pricing without any proper announcement or documentation recently is a pretty bad and frustrating move for users,” said one user.
Cursor has issued a clarification, admitting they “missed the mark” with the updated pricing and are fully refunding affected users. But that hasn’t quelled the criticism, as Reddit threads are filled with complaints that go beyond pricing alone.