Microsoft Takes Its First Step to Make VS Code an Open-Source AI Editor


Microsoft has taken its first concrete step towards making Visual Studio Code an open-source AI editor by open-sourcing the GitHub Copilot Chat extension under the MIT license. The company announced the milestone on June 30 via the VS Code team’s blog, calling it a move toward transparency, extensibility, and developer-centric AI tooling.

The newly open-sourced code reveals how Copilot Chat handles agent mode, context engineering, and telemetry. According to the blog post, “Everything, from our system prompts, implementation details, to the telemetry we capture, is available in all transparency.” Contributions and feedback from developers are welcome on GitHub, with the long-term goal of integrating this extension into the core VS Code codebase.

The announcement follows Microsoft CEO Satya Nadella’s keynote at Build 2025, where he confirmed the company’s commitment to AI-powered development. “This is a big deal. We will integrate these AI-powered capabilities directly into the core of VS Code, bringing them into the same open source repo that powers the world’s most loved dev tool,” said Nadella.

Erich Gamma, creator of VS Code, reinforced the motivation by noting that some organisations are reluctant to adopt closed-source IDEs, and that an open-source VS Code would be a natural choice for them. The company will also open-source its prompt testing infrastructure to support third-party extension developers.

This strategic shift comes amid growing demand for openness in developer tooling. According to Microsoft, the rapid advancement in LLMs and the convergence of best practices across AI coding UIs have reduced the need for proprietary techniques.

While the GitHub Copilot extension for inline completions remains closed, Microsoft plans to bring that functionality into the open-sourced Chat extension in the coming months.

The move invites comparison with AI-first VS Code forks, such as Cursor and Windsurf, both valued in the billions. “Is it just me or is it kinda funny that OpenAI bought Windsurf for $3B and then Microsoft just open-sourced Copilot,” quipped a user on X.

Whether it’s community-driven extensibility or agentic DevOps, Microsoft appears ready to reshape how developers interact with AI, on their own terms, and increasingly in the open.



Fi.Money Launches Protocol to Connect Personal Finance Data with AI Assistants


Fi.Money, a money management platform based in India, has launched what it says is the first consumer-facing implementation of a model context protocol (MCP) for personal finance. 

Fi MCP is designed to bring together users’ complete financial lives, including bank accounts, mutual funds, loans, insurance, EPF, real estate, gold, and more, seamlessly into the AI assistants of their choice, the company said in a statement.

Users can choose to share this consolidated data with any AI tool, enabling private, intelligent conversations about their money, fully on their terms, it added. 

Until now, users have had to stitch together insights from various finance apps, statements, and spreadsheets. When turning to AI tools like ChatGPT or Gemini for advice, they’ve relied on manual inputs, guesswork, or generic prompts. 

There was no structured, secure, consent-driven way to help AI understand their actual financial data without sharing screenshots or uploading statements and reports.

The company said that with Fi’s new MCP feature, users can see their entire financial life in a single, unified view. 

This data can be privately exported in an AI-readable format or configured for near-real-time syncing with AI assistants. 

Once connected, users can ask personal, data-specific questions such as, “Can I afford a six-month career break?” or “What are the mistakes in my portfolio?” and receive context-aware responses based on their actual financial information.
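Since MCP itself is an open protocol, it is possible to sketch what such an integration can look like in practice. The following is a hypothetical example of a personal-finance MCP server built with the open-source MCP Python SDK; the server name, tool, and figures are invented for illustration and do not reflect Fi.Money’s actual implementation.

```python
# Hypothetical sketch of a personal-finance MCP server using the
# open-source `mcp` Python SDK (FastMCP). All names and values are
# invented for illustration; this is not Fi.Money's implementation.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("personal-finance")

@mcp.tool()
def net_worth_summary() -> dict:
    """Return a consolidated snapshot an AI assistant can reason over."""
    return {
        "bank_balance_inr": 250_000,      # placeholder values
        "mutual_funds_inr": 1_200_000,
        "loans_outstanding_inr": 400_000,
        "epf_inr": 600_000,
    }

if __name__ == "__main__":
    # An MCP client (e.g. a chat assistant) can then call
    # net_worth_summary() over the protocol when answering money questions.
    mcp.run()
```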

As per the statement, the launch comes at a time when Indian consumers are increasingly seeking digital-first, integrated financial tools. Building on India’s pioneering digital infrastructure, Fi’s MCP represents the next layer of consumer-facing innovation, one that empowers consumers to activate their own data. 

Fi Money is the first in the world to let individuals use AI meaningfully with their own money, the company claimed. While most AIs lack context about one’s finances, Fi’s MCP changes that by giving users an AI that actually understands their money.

The Fi MCP is available to all Fi Money users. Any user can download the Fi Money app, consolidate their finances in a few minutes, and start using their data with their preferred AI assistant. 

“This is the first time any personal finance app globally has enabled users to securely connect their actual financial data with tools like ChatGPT, Gemini, or Claude,” Sujith Narayanan, co-founder of Fi.Money, said in the statement.

“With MCP, we’re giving users not just a dashboard, but a secure bridge between their financial data and the AI tools they trust. It’s about helping people ask better questions and get smarter answers about their money,” he added.



Large Language Models: A Self-Study Roadmap



Large language models are a big step forward in artificial intelligence. They can predict and generate text that sounds like it was written by a human. LLMs learn the rules of language, like grammar and meaning, which allows them to perform many tasks. They can answer questions, summarize long texts, and even create stories. The growing need for automatically generated and organized content is driving the expansion of the large language model market. According to one report, “Large Language Model (LLM) Market Size & Forecast”:

“The global LLM Market is currently witnessing robust growth, with estimates indicating a substantial increase in market size. Projections suggest a notable expansion in market value, from USD 6.4 billion in 2024 to USD 36.1 billion by 2030, reflecting a substantial CAGR of 33.2% over the forecast period”.

 

This means 2025 might be the best year to start learning LLMs. Mastering them calls for a structured, stepwise approach that covers core concepts, model architectures, training and optimization, deployment, and advanced retrieval methods. This roadmap presents a step-by-step method to gain expertise in LLMs. So, let’s get started.

 

Step 1: Cover the Fundamentals

 
You can skip this step if you already know the basics of programming, machine learning, and natural language processing. However, if you are new to these concepts, consider learning them from the following resources:

  • Programming: You need to learn the basics of programming in Python, the most popular programming language for machine learning. These resources can help you learn Python:
  • Machine Learning: After you learn programming, you have to cover the basic concepts of machine learning before moving on to LLMs. The key here is to focus on concepts like supervised vs. unsupervised learning, regression, classification, clustering, and model evaluation. The best course I found to learn the basics of ML is:
  • Natural Language Processing: It is very important to learn the fundamental topics of NLP if you want to learn LLMs. Focus on the key concepts: tokenization, word embeddings, attention mechanisms, etc. (a small tokenization sketch follows this list). I have given a few resources that might help you learn NLP:
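To make the first of those concepts concrete, here is a small sketch of subword tokenization using the Hugging Face transformers library; the checkpoint name is just an illustrative choice:

```python
# A minimal sketch of subword tokenization with the Hugging Face
# `transformers` library; the checkpoint name is an illustrative choice.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Large language models learn the rules of language."
tokens = tokenizer.tokenize(text)  # the subword pieces the model actually sees
ids = tokenizer.encode(text)       # integer ids, with [CLS]/[SEP] special tokens added

print(tokens)
print(ids)
```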

 

Step 2: Understand Core Architectures Behind Large Language Models

 
Large language models rely on various architectures, with transformers being the most prominent foundation. Understanding these different architectural approaches is essential for working effectively with modern LLMs. Here are the key topics and resources to enhance your understanding:

  • Understand the transformer architecture, with emphasis on self-attention, multi-head attention, and positional encoding (a minimal self-attention sketch follows this list).
  • Start with Attention Is All You Need, then explore different architectural variants: decoder-only models (GPT series), encoder-only models (BERT), and encoder-decoder models (T5, BART).
  • Use libraries like Hugging Face’s Transformers to access and implement various model architectures.
  • Practice fine-tuning different architectures for specific tasks like classification, generation, and summarization.
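As a concrete reference point, here is a minimal NumPy sketch of the scaled dot-product attention operation at the heart of the transformer; shapes and values are illustrative:

```python
# A minimal NumPy sketch of scaled dot-product attention, the core
# operation from "Attention Is All You Need"; shapes are illustrative.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) arrays. Returns a (seq_len, d_k) output."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each output is a weighted sum of the values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional embeddings
# Self-attention: queries, keys, and values all come from the same tokens
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```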

 

Recommended Learning Resources

 

Step 3: Specialize in Large Language Models

 
With the basics in place, it’s time to focus specifically on LLMs. These courses are designed to deepen your understanding of their architecture, ethical implications, and real-world applications:

  • LLM University – Cohere (Recommended): Offers both a sequential track for newcomers and a non-sequential, application-driven path for seasoned professionals. It provides a structured exploration of both the theoretical and practical aspects of LLMs.
  • Stanford CS324: Large Language Models (Recommended): A comprehensive course exploring the theory, ethics, and hands-on practice of LLMs. You will learn how to build and evaluate LLMs.
  • Maxime Labonne Guide (Recommended): This guide provides a clear roadmap for two career paths: LLM Scientist and LLM Engineer. The LLM Scientist path is for those who want to build advanced language models using the latest techniques. The LLM Engineer path focuses on creating and deploying applications that use LLMs. It also includes The LLM Engineer’s Handbook, which takes you step by step from designing to launching LLM-based applications.
  • Princeton COS597G: Understanding Large Language Models: A graduate-level course that covers models like BERT, GPT, T5, and more. Ideal for those aiming to engage in deep technical research, it explores both the capabilities and limitations of LLMs.
  • Fine Tuning LLM Models – Generative AI Course: When working with LLMs, you will often need to fine-tune them, so consider learning efficient fine-tuning techniques such as LoRA and QLoRA, as well as model quantization techniques (a minimal LoRA sketch follows this list). These approaches can help reduce model size and computational requirements while maintaining performance. This course will teach you fine-tuning using QLoRA and LoRA, as well as quantization using Llama 2, Gradient, and the Google Gemma model.
  • Finetune LLMs to teach them ANYTHING with Huggingface and Pytorch | Step-by-step tutorial: It provides a comprehensive guide on fine-tuning LLMs using Hugging Face and PyTorch. It covers the entire process, from data preparation to model training and evaluation, enabling viewers to adapt LLMs for specific tasks or domains.
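To give a feel for what parameter-efficient fine-tuning looks like in code, here is a minimal LoRA sketch assuming the Hugging Face transformers and peft libraries; the base model and hyperparameters are illustrative choices, not a recipe:

```python
# A minimal sketch of parameter-efficient fine-tuning with LoRA,
# assuming the Hugging Face `transformers` and `peft` libraries.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # small model used purely for illustration
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small trainable low-rank matrices into attention layers,
# so only a fraction of the parameters are updated during fine-tuning.
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the updates
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```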

 

Step 4: Build, Deploy & Operationalize LLM Applications

 
Learning a concept theoretically is one thing; applying it practically is another. The former strengthens your understanding of fundamental ideas, while the latter enables you to translate those concepts into real-world solutions. This section focuses on integrating large language models into projects using popular frameworks, APIs, and best practices for deploying and managing LLMs in production and local environments. By mastering these tools, you’ll efficiently build applications, scale deployments, and implement LLMOps strategies for monitoring, optimization, and maintenance.

  • Application Development: Learn how to integrate LLMs into user-facing applications or services.
  • LangChain: LangChain is a fast and efficient framework for LLM projects. Learn how to build applications using LangChain.
  • API Integrations: Explore how to connect various APIs, like OpenAI’s, to add advanced features to your projects (a minimal API sketch follows this list).
  • Local LLM Deployment: Learn to set up and run LLMs on your local machine.
  • LLMOps Practices: Learn the methodologies for deploying, monitoring, and maintaining LLMs in production environments.
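As a starting point, here is a minimal sketch of wrapping a hosted LLM API in an application function, assuming the official openai Python client and an OPENAI_API_KEY environment variable; the model name is an illustrative choice:

```python
# A minimal sketch of calling a hosted LLM API from an application,
# assuming the official `openai` Python client and an OPENAI_API_KEY
# environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    """Ask the model for a one-paragraph summary of `text`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a concise summarizer."},
            {"role": "user", "content": f"Summarize in one paragraph:\n{text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Large language models predict and generate human-like text."))
```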

 

Recommended Learning Resources & Projects

Building LLM applications:

Local LLM Deployment:

Deploying & Managing LLM applications In Production Environments:

GitHub Repositories:

  • Awesome-LLM: It is a curated collection of papers, frameworks, tools, courses, tutorials, and resources focused on large language models (LLMs), with a special emphasis on ChatGPT.
  • Awesome-langchain: This repository is the hub to track initiatives and projects related to LangChain’s ecosystem.

 

Step 5: RAG & Vector Databases

 
Retrieval-Augmented Generation (RAG) is a hybrid approach that combines information retrieval with text generation. Instead of relying only on pre-trained knowledge, RAG retrieves relevant documents from external sources before generating responses. This improves accuracy, reduces hallucinations, and makes models more useful for knowledge-intensive tasks.

  • Understand RAG & its Architectures: Standard RAG, Hierarchical RAG, Hybrid RAG, etc.
  • Vector Databases: Understand how to implement vector databases with RAG. Vector databases store and retrieve information based on semantic meaning rather than exact keyword matches, which makes them ideal for RAG-based applications because they allow fast and efficient retrieval of relevant documents.
  • Retrieval Strategies: Implement dense retrieval, sparse retrieval, and hybrid search for better document matching (a minimal dense-retrieval sketch follows this list).
  • LlamaIndex & LangChain: Learn how these frameworks facilitate RAG.
  • Scaling RAG for Enterprise Applications: Understand distributed retrieval, caching, and latency optimizations for handling large-scale document retrieval.
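To make the retrieval step concrete, here is a minimal dense-retrieval RAG sketch using sentence-transformers and NumPy; the documents are toy data, and generate() stands in for any LLM call:

```python
# A minimal RAG sketch: embed documents, retrieve the closest match,
# and prepend it to the prompt. Assumes `sentence-transformers` and
# `numpy`; the generate() call is a placeholder for any LLM API.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available 24/7 via chat and email.",
    "Premium plans include priority onboarding.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # dot product of unit vectors = cosine similarity
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long do I have to return a product?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# response = generate(prompt)  # hand the grounded prompt to any LLM
print(prompt)
```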

 

Recommended Learning Resources & Projects

Basic Foundational courses:

Advanced RAG Architectures & Implementations:

Enterprise-Grade RAG & Scaling:

 

Step 6: Optimize LLM Inference

 
Optimizing inference is crucial for making LLM-powered applications efficient, cost-effective, and scalable. This step focuses on techniques to reduce latency, improve response times, and minimize computational overhead.
 

Key Topics

  • Model Quantization: Reduce model size and improve speed using techniques like 8-bit and 4-bit quantization (e.g., GPTQ, AWQ); a minimal 4-bit loading sketch follows this list.
  • Efficient Serving: Deploy models efficiently with frameworks like vLLM, TGI (Text Generation Inference), and DeepSpeed.
  • LoRA & QLoRA: Use parameter-efficient fine-tuning methods to enhance model performance without high resource costs.
  • Batching & Caching: Optimize API calls and memory usage with batch processing and caching strategies.
  • On-Device Inference: Run LLMs on edge devices using tools like GGUF (for llama.cpp) and optimized runtimes like ONNX and TensorRT.
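As one concrete example, here is a sketch of loading a model with 4-bit quantization through the Hugging Face transformers integration with bitsandbytes; it assumes a CUDA GPU with transformers, accelerate, and bitsandbytes installed, and the model name is an illustrative choice:

```python
# A minimal sketch of 4-bit quantized loading via the `transformers`
# integration with `bitsandbytes`; assumes a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit, as used by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls at runtime
)

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",  # place layers across available devices
)

inputs = tokenizer("Explain KV caching in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```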

 

Recommended Learning Resources

 

Wrapping Up

 
This guide covers a comprehensive roadmap for learning and mastering LLMs in 2025. I know it might seem overwhelming at first, but trust me: if you follow this step-by-step approach, you’ll cover everything in no time. If you have any questions or need more help, do comment.

Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.




HCLSoftware Launches Domino 14.5 With Focus on Data Privacy and Sovereign AI


HCLSoftware, a global enterprise software leader, launched HCL Domino 14.5 on July 7 as a major upgrade, specifically targeting governments and organisations operating in regulated sectors that are concerned about data privacy and digital independence.

A key feature of the new release is Domino IQ, a sovereign AI extension built into the Domino platform. This new tool gives organisations full control over their AI models and data, helping them comply with regulations such as the European AI Act. It also removes dependence on foreign cloud services, making it easier for public sector bodies and banks to protect sensitive information.

“The importance of data sovereignty and avoiding unnecessary foreign government influence extends beyond SaaS solutions and AI. Specifically for collaboration – the sensitive data within email, chat, video recordings and documents. With the launch of Domino+ 14.5, HCLSoftware is helping over 200+ government agencies safeguard their sensitive data,” said Richard Jefts, executive vice president and general manager at HCLSoftware.

The updated Domino+ collaboration suite now includes enhanced features for secure messaging, meetings, and file sharing. These tools are ready to deploy and meet the needs of organisations that handle highly confidential data.

The platform is supported by IONOS, a leading European cloud provider. Achim Weiss, CEO of IONOS, added, “Today, more than ever, true digital sovereignty is the key to Europe’s digital future. That’s why at IONOS we are proud to provide the sovereign cloud infrastructure for HCL’s sovereign collaboration solutions.”

Other key updates in Domino 14.5 include achieving BSI certification for information security, the integration of security information and event management (SIEM) tools to enhance threat detection and response, and full compliance with the European Accessibility Act, ensuring that all web-based user experiences are inclusive and accessible to everyone.

With the launch of Domino 14.5, HCLSoftware is aiming to be a trusted technology partner for public sector and highly regulated organisations seeking control, security, and compliance in their digital operations.


