Researchers optimize AI systems for science

Rui Zhang was joined by two members of his research group, Ryo Kamoi and Yusen Zhang, to present some of their previous work at last year’s Conference on Language Modeling in Philadelphia. Credit: Pennsylvania State University

Using services like ChatGPT or Microsoft Copilot can sometimes seem like magic—to the point it can be easy to forget about the advanced science running behind the scenes of any artificial intelligence (AI) system. Like any complex system, however, there is always room for improvement and optimization, according to Rui Zhang, assistant professor of computer science and engineering in the Penn State School of Electrical Engineering and Computer Science.

Zhang and his research group recently authored three papers introducing new approaches to processing high-resolution images and automatically prompting better responses from AI systems. The papers are currently available on the arXiv preprint server.

The first will be presented at the 63rd Annual Meeting of the Association for Computational Linguistics, July 27 through Aug. 1 in Vienna, Austria. The second will be shared at the 2025 International Conference on Computer Vision, Oct. 19–23 in Honolulu, Hawaii, and the third was presented at the 13th International Conference on Learning Representations, April 24–28 in Singapore.

In the following Q&A, Zhang discussed his group's work, how it can improve the efficiency and usefulness of AI, and some strategies individuals can employ to get more value out of their personal AI use.

What is prompt engineering? Are there specific things readers can do to write better prompts for an AI system?

Prompt engineering is the process of designing effective inputs—or “prompts”—that guide AI systems like ChatGPT to produce better responses. Since these systems are sensitive to how questions are asked, a well-crafted prompt can significantly improve the system’s output.

For example, instead of asking, “summarize this article,” you might say, “summarize this article in three bullet points for a high school student.” The extra context helps the AI tailor its response. For everyday users, the key strategies are to be clear, specific and goal-oriented—don’t be afraid to try multiple prompt versions to refine the results.
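The contrast above can be made concrete with a small sketch. The `build_prompt` helper below is hypothetical, used purely to illustrate how adding audience and format context changes a prompt:

```python
from typing import Optional

def build_prompt(task: str, audience: Optional[str] = None,
                 fmt: Optional[str] = None) -> str:
    """Compose a prompt from a task plus optional format and audience hints."""
    parts = [task]
    if fmt:
        parts.append(f"Answer in {fmt}.")
    if audience:
        parts.append(f"Write for {audience}.")
    return " ".join(parts)

# A bare request versus the same request with extra context.
vague = build_prompt("Summarize this article.")
specific = build_prompt(
    "Summarize this article.",
    audience="a high school student",
    fmt="three bullet points",
)
print(vague)
print(specific)
```

The second prompt gives the model both a target format and a target reader, which is exactly the kind of context that tends to sharpen responses.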

What are the benefits of automating and optimizing prompt generation?

While good prompt engineering can greatly improve AI performance, writing the best prompt often takes time, experimentation and expertise in the subject matter included in the prompt. In our research, we developed a method called GReaTer that allows AI systems to automatically generate and refine prompts using gradient-based optimization, a type of algorithm that excels at optimizing data in AI systems.

We also developed GReaTerPrompt, a user-friendly and open-source toolkit built on the GReaTer method, which enables models to automatically generate and refine prompts for a wide range of tasks. Automating this process means AI can adapt to new tasks with less human input, improving accuracy, saving time and lowering costs.

This is especially valuable for users who lack the time or expertise in a subject to come up with a better prompt. By providing an open-source toolkit, which is freely available for anyone to download, modify or share, we effectively distribute access to our work for all interested users.
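GReaTer itself optimizes prompts using gradient signals from the model, which is beyond a short snippet. As a loose stand-in, the toy sketch below illustrates the general idea of automatic prompt refinement: generate candidate prompts, score them with some objective (here a hypothetical heuristic, not GReaTer's gradient-based one), and keep the best:

```python
def score_prompt(prompt: str) -> float:
    """Toy scorer: reward prompts that include specificity cues.
    A real optimizer would score prompts by task performance instead."""
    score = 0.0
    for cue in ("bullet", "student", "step"):
        if cue in prompt.lower():
            score += 1.0
    return score

candidates = [
    "Summarize this article.",
    "Summarize this article in three bullet points.",
    "Summarize this article in three bullet points for a high school student.",
]

# Keep the highest-scoring candidate, as a search-based optimizer would.
best = max(candidates, key=score_prompt)
print(best)
```

The value of automating this loop is that the scoring and selection happen without a human manually iterating on prompt wording.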

How did you measure the effectiveness of GReaTer? Are there real-world tools that could improve with its implementation?

We evaluated GReaTer on a wide variety of language reasoning and mathematical problem-solving tasks, such as answering complex questions, solving logic puzzles and performing mathematical computations. The results showed that GReaTer significantly improved performance compared to standard prompting—especially for smaller language models, which typically struggle with these tasks because they lack the specialized capacity of larger models.

In some cases, these GReaTer-optimized smaller models rivaled much larger ones in quality. Real-world applications that could benefit include AI-powered tutors, writing assistants, customer support agents and any tool that needs to adapt quickly to different users or topics without manual reprogramming.

What is HRScene, and why do researchers care about ‘high-resolution image understanding’?

HRScene is a new benchmark we developed to evaluate how well modern vision-language models like GPT-4V, Gemini or Claude can understand high-resolution, information-dense images with millions of pixels. Although these models can answer questions about images using natural language, they often fall short when dealing with large, detailed visuals.

High-resolution image understanding is critical because many real-world scientific and societal applications depend on subtle, localized details that may be missed by models not equipped to handle large-scale visual input. HRScene includes curated examples from domains like radiology, plant phenotyping, remote sensing and astronomy, which will help accelerate the development of AI systems capable of interpreting visuals and improve their assessment accuracy.
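To make "large-scale visual input" concrete: one common strategy (not specific to HRScene or any particular model) is to split a high-resolution image into fixed-size tiles so that each tile fits a model's native input resolution. A minimal sketch:

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int) -> list:
    """Split an H x W x C array into non-overlapping tile x tile patches."""
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(img[y:y + tile, x:x + tile])
    return tiles

# Stand-in "high-resolution" image: 1024 x 2048 pixels, 3 channels.
img = np.zeros((1024, 2048, 3), dtype=np.uint8)
patches = tile_image(img, 512)
print(len(patches))  # 2 rows x 4 columns of tiles
```

Tiling preserves local detail, which is precisely where models equipped only for low-resolution input tend to miss the subtle, localized cues the benchmark probes for.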

What are the applications of accurate and efficient high-resolution image processing?

The potential impact spans many scientific and social domains. In healthcare, high-resolution AI tools could help interpret radiology scans like MRIs or CTs more effectively, leading to earlier and more accurate diagnoses. In agriculture, AI could assist with plant phenotyping—analyzing traits like leaf structure or disease presence from detailed images—to improve crop yields and sustainability. In environmental science, high-resolution satellite imagery is used for disaster monitoring, urban planning and climate research.

Astronomy could also benefit, as researchers currently analyze telescope imagery at extremely high resolutions to detect faint or distant celestial objects. AI systems that can reliably process such data could accelerate scientific discovery, enhance public health tools and improve responses to global challenges.

More information:
Wenliang Zheng et al, GREATERPROMPT: A Unified, Customizable, and High-Performing Open-Source Toolkit for Prompt Optimization, arXiv (2025). DOI: 10.48550/arxiv.2504.03975

Yusen Zhang et al, HRScene: How Far Are VLMs from Effective High-Resolution Image Understanding?, arXiv (2025). DOI: 10.48550/arxiv.2504.18406

Sarkar Snigdha Sarathi Das et al, GReaTer: Gradients over Reasoning Makes Smaller Language Models Strong Prompt Optimizers, arXiv (2024). DOI: 10.48550/arxiv.2412.09722

Journal information:
arXiv


Citation:
Researchers optimize AI systems for science (2025, July 31)
retrieved 31 July 2025
from https://techxplore.com/news/2025-07-optimize-ai-science.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Billionaire Philippe Laffont Just Sold Coatue Management’s Stake in Super Micro Computer and Piled Into Another Artificial Intelligence (AI) Giant Up Over 336,000% Since Its IPO

Philippe Laffont is part of an elite group of investors called the Tiger Cubs, who worked for Julian Robertson’s Tiger Management in the 1990s.

In the 1990s, an elite group of investors worked for a tech-focused hedge fund called Tiger Management, led by the legendary investor Julian Robertson. Not only did Robertson mentor this group of investors, but he would go on to seed many of their future hedge funds as the talented group, referred to as the Tiger Cubs, went on to become great investors in their own right.

Philippe Laffont, the founder of Coatue Management, is part of this group, and is now viewed as one of the great tech investors of the modern era. Coatue Management’s equity holdings were valued at roughly $35 billion at the end of the second quarter. That’s why investors are always paying attention to which stocks Coatue is buying and selling.

In the second quarter, the fund sold its stake in Super Micro Computer (SMCI -5.42%) and piled into another artificial intelligence (AI) giant that has generated a total return of over 336,000% since its initial public offering.

Image source: Getty Images.

Super Micro Computer: Beating the shorts so far

AI server and tech infrastructure maker Super Micro Computer has been a controversial and volatile play for the past year. In August 2024, short-seller Hindenburg Research came out with a major short report alleging potential accounting fraud at the company. The report said that Supermicro rehired executives who had been a part of an accounting scandal at the company in 2018 that involved understating expenses and overstating revenue.

The stock got hit hard after Supermicro announced it would need to delay its annual 2024 filing to assess its internal controls. However, the company would eventually go on to file its 2024 10-K and did not need to restate any of its financial statements, a good sign for investors. Furthermore, management earlier this year also provided strong fiscal 2026 guidance of $40 billion in revenue, way ahead of consensus at the time. Supermicro’s fiscal year ends on June 30 of each year.

In August, shares struggled after the company reported lower-than-expected quarterly results and weaker-than-expected guidance, which management attributed to President Donald Trump’s tariffs, reduced working capital in June and “specification changes from a major new customer.” Laffont and Coatue loaded up on the stock sometime in the fourth quarter of 2024 and sold in the second quarter of this year, so the fund could have bought the dip after the short report and might have sold over concerns about tariffs, although that’s speculation. Supermicro’s stock is up about 46% this year, so Coatue seems to have timed its trade well.

Supermicro looks relatively cheap right now for a stock benefiting from the AI boom, trading around 16 times forward earnings. Tariffs are likely to be an ongoing issue, but if AI demand remains strong, Supermicro, which supplies servers to the likes of Nvidia, should be a major beneficiary. The stock may remain volatile, but I think investors can take a position in the more speculative part of their portfolio.

Oracle: A longtime tech player benefiting from AI

With a market cap of nearly $664 billion, Oracle (ORCL -5.97%) isn’t part of the “Magnificent Seven,” but it’s another large tech company expected to benefit from the AI capital expenditure boom. Coatue purchased over 3.8 million shares in the second quarter, valued at over $843 million.

The cloud giant offers clients the ability to tap into a number of AI solutions including generative AI and machine learning capabilities that provide automation tools and AI application development, among other services. Similar to Microsoft and Amazon, although not as dominant, Oracle’s position as a cloud provider positions the company well to be a first point of contact for clients looking to add AI capabilities.

In the company’s most recent earnings report for its fourth quarter of fiscal 2025, which ended May 31, Oracle reported results ahead of Wall Street estimates and said that cloud infrastructure revenue sales should increase 70% in fiscal year 2026, after generating 52% growth in fiscal 2025.

Oracle CEO Larry Ellison said the company is particularly well positioned because it has a strong data advantage and has developed one of the most comprehensive databases in the world. “Our applications take all of your application data and make that data available to the most popular AI models,” he said on Oracle’s earnings call for the company’s fiscal fourth quarter of 2025.

“If you like ChatGPT, you use ChatGPT. If you like Grok, you use Grok. You use that in the Oracle Cloud. We are the key enabler for enterprises to use their own data and models. No one else is doing that,” he added.

Having gone public in 1986, Oracle has been a major tech disruptor for decades. The stock is up over 336,000% since its initial public offering and also up over 41% this year. Trading at 34 times forward earnings, the stock is not necessarily cheap, but given its track record and strong expected growth in cloud infrastructure, Oracle can benefit from AI without being as much in the spotlight as some of the Magnificent Seven names.

Bram Berkowitz has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Amazon, Microsoft, Nvidia, and Oracle. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




TikTok Salaries Revealed: How Much AI, E-Commerce Workers Make in 2025

TikTok’s US plans are up in the air due to a divest-or-ban law that puts its future in jeopardy. But it’s still offering six-figure salaries to workers this year in key areas like e-commerce and artificial intelligence.

It’s sought to hire data scientists to sharpen its search algorithm, court workers to grow its e-commerce platform TikTok Shop, and bring in machine learning engineers to improve its content feed and recommendations.

The company’s jobs portal lists over 1,800 open roles in the US in cities like Austin, San Jose, Seattle, and New York.

Like other Big Tech firms, work expectations at TikTok and its owner, ByteDance, are demanding. The company runs performance reviews twice a year, and low scorers can be placed on performance-improvement plans or even shown the door. But the opportunity to work at one of the most influential tech companies in the world continues to draw in talent.

Outside e-commerce, TikTok is shaking up areas like music marketing and young people’s news habits. If it can navigate political tides in the US and China, where ByteDance was founded, it will stand alongside YouTube and a few other players in shaping the next phase of media.

“From a career growth standpoint, you have access to huge budgets and big names,” a former staffer said of working at TikTok. “Everyone in the industry wants to talk to you.”

While TikTok and ByteDance don’t disclose salary information publicly (unless required by state law), they do submit pay ranges in federal filings when they look to hire workers from outside the US.

To understand more about the company’s pay rates, Business Insider reviewed thousands of TikTok salary offers for foreign hires at the company, as well as its owner, ByteDance, for the first three quarters of the reporting year that ran through June 30. The results don’t include equity or other benefits that employees often receive in addition to base pay. But they paint a picture of the range of pay a worker might expect in roles like software engineering, data science, or product management.

The foreign-hire data shows a wide range of salaries at the companies. For example, a finance representative could earn $65,000 a year, and a global head of product and design position could fetch a $949,349 annual salary.

Backend software engineers at TikTok could earn between $144,000 and $301,158, based on the salary data, though rates increased beyond that for specialties like trust and safety. Data scientist positions at TikTok were generally offered between $85,821 and $283,629 — or more in specific areas like e-commerce. For TikTok machine learning scientists, the range was between $168,000 and $390,000, while general marketing managers were offered between $85,000 and $430,000.

These salary offers fall in line with pay rates in federal applications at other Big Tech firms. Meta’s first-quarter visa filings revealed it offered data scientists between $122,760 and $270,000, for example. Meanwhile, a staff software engineer at Google could receive between $220,000 and $323,000, according to the company’s first-quarter filings.

Here are the salary ranges TikTok and ByteDance offered for other roles in key business areas, based on recent applications. TikTok and ByteDance did not respond to requests for comment.

E-commerce and TikTok Shop roles

TikTok Shop – Celebrity Team Live Operation Manager: $94,000

TikTok Shop – US Data Analyst – Logistics: $128,000

TikTok Shop – Campaign Strategy Operations Manager: $132,000

TikTok Shop – Category Manager – Health: $135,000

TikTok Shop – Anti-Fraud Ops Program Mgr – Global Selling: $180,000

TikTok Shop – Data Scientist: $218,000 to $304,000

Product Manager, User Growth Customer Lifecycle-TikTok Shop: $220,000

Strategy Manager, E-Commerce: $228,000 to $230,000

Software Engineer – E-commerce Recommendation Infrastructure: $237,000 to $315,207

TikTok Shop – Inventory Placement Strategy Manager: $250,000

TikTok Shop – Compliance Operation: $257,600

Senior Machine Learning Engineer, E-commerce: $320,000

Tech Lead – E-commerce Recommendation Infrastructure: $320,113

Logistics Procurement Lead, TikTok US E-commerce: $350,000

Senior Data Scientist, Content E-commerce: $350,000

Tech Lead, Global E-commerce Governance Platform: $365,000

Global E-commerce Solutions Manager: $480,000

AI and machine learning roles

Software Engineer (AI Platform): $144,000

Research Scientist (TikTok AI Privacy): $188,000

Product Manager GenAI Safety, Trust & Safety: $218,400

Senior Product Designer, Creation (AI Projects): $221,368

Machine Learning Engineer – Computer Vision: $228,960

Software Engineer, Machine Learning Infrastructure: $270,000 to $320,783

Site Reliability Engineer, AI Applications: $276,000

AI Product Manager: $300,010

Product Manager Lead, Emerging Product & AI Safety: $336,000

AI Security Researcher – Security Flow: $340,000

Senior Machine Learning Engineer, TikTok Recommendation: $386,115

Search roles

Search Product Operations – Creator Search Optimization: $110,000

Software Engineer – TikTok Search Business Infrastructure: $154,880 to $214,720

Product Manager, Search Ads: $205,000

Machine Learning Engineer – Search Ads: $229,200 to $354,000

Machine Learning Engineer – TikTok Search: $241,200 to $300,000

Senior Machine Learning Engineer – TikTok Search Business: $268,920

Product Manager – TikTok Search: $287,500

Product Manager, Search Content Ecosystem: $400,000

Leader of Search and Recommendation Product (ByteDance): $540,552

Search Ads Closed-loop Product Manager: $564,000






How to Build a Conversational Research AI Agent with LangGraph: Step Replay and Time-Travel Checkpoints

In this tutorial, we aim to understand how LangGraph enables us to manage conversation flows in a structured manner, while also providing the power to “time travel” through checkpoints. By building a chatbot that integrates a free Gemini model and a Wikipedia tool, we can add multiple steps to a dialogue, record each checkpoint, replay the full state history, and even resume from a past state. This hands-on approach enables us to see, in real-time, how LangGraph’s design facilitates the tracking and manipulation of conversation progression with clarity and control. Check out the FULL CODES here.

!pip -q install -U langgraph langchain langchain-google-genai google-generativeai typing_extensions
!pip -q install "requests==2.32.4"


import os
import json
import textwrap
import getpass
import time
from typing import Annotated, List, Dict, Any, Optional


from typing_extensions import TypedDict


from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage
from langchain_core.tools import tool


from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import ToolNode, tools_condition


import requests
from requests.adapters import HTTPAdapter, Retry


if not os.environ.get("GOOGLE_API_KEY"):
   os.environ["GOOGLE_API_KEY"] = getpass.getpass("🔑 Enter your Google API Key (Gemini): ")


llm = init_chat_model("google_genai:gemini-2.0-flash")

We start by installing the required libraries, setting up our Gemini API key, and importing all the necessary modules. We then initialize the Gemini model using LangChain so that we can use it as the core LLM in our LangGraph workflow.

WIKI_SEARCH_URL = "https://en.wikipedia.org/w/api.php"


_session = requests.Session()
_session.headers.update({
   "User-Agent": "LangGraph-Colab-Demo/1.0 (contact: [email protected])",
   "Accept": "application/json",
})
retry = Retry(
   total=5, connect=5, read=5, backoff_factor=0.5,
   status_forcelist=(429, 500, 502, 503, 504),
   allowed_methods=("GET", "POST")
)
_session.mount("https://", HTTPAdapter(max_retries=retry))
_session.mount("http://", HTTPAdapter(max_retries=retry))


def _wiki_search_raw(query: str, limit: int = 3) -> List[Dict[str, str]]:
   """
   Use MediaWiki search API with:
     - origin='*' (good practice for CORS)
     - Polite UA + retries
   Returns compact list of {title, snippet_html, url}.
   """
   params = {
       "action": "query",
       "list": "search",
       "format": "json",
       "srsearch": query,
       "srlimit": limit,
       "srprop": "snippet",
       "utf8": 1,
       "origin": "*",
   }
   r = _session.get(WIKI_SEARCH_URL, params=params, timeout=15)
   r.raise_for_status()
   data = r.json()
   out = []
   for item in data.get("query", {}).get("search", []):
       title = item.get("title", "")
       page_url = f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}"
       snippet = item.get("snippet", "")
       out.append({"title": title, "snippet_html": snippet, "url": page_url})
   return out


@tool
def wiki_search(query: str) -> List[Dict[str, str]]:
   """Search Wikipedia and return up to 3 results with title, snippet_html, and url."""
   try:
       results = _wiki_search_raw(query, limit=3)
       return results if results else [{"title": "No results", "snippet_html": "", "url": ""}]
   except Exception as e:
       return [{"title": "Error", "snippet_html": str(e), "url": ""}]


TOOLS = [wiki_search]

We set up a Wikipedia search tool with a custom session, retries, and a polite user-agent. We define _wiki_search_raw to query the MediaWiki API and then wrap it as a LangChain tool, allowing us to seamlessly call it within our LangGraph workflow.
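To see the shape of what _wiki_search_raw produces without touching the network, we can run the same parsing logic on a mocked MediaWiki response payload (the sample data below is invented for illustration):

```python
# Mocked MediaWiki search response, shaped like data["query"]["search"]
# in _wiki_search_raw. The title and snippet are made-up sample values.
sample = {
    "query": {
        "search": [
            {"title": "LangGraph", "snippet": "A <b>graph</b> framework..."},
        ]
    }
}

# Same compact-result construction as the tool performs.
out = []
for item in sample.get("query", {}).get("search", []):
    title = item.get("title", "")
    out.append({
        "title": title,
        "snippet_html": item.get("snippet", ""),
        "url": f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}",
    })

print(out[0]["url"])
```

This is the list-of-dicts format the chatbot node receives back when the model calls the wiki_search tool.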

class State(TypedDict):
   messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


llm_with_tools = llm.bind_tools(TOOLS)


SYSTEM_INSTRUCTIONS = textwrap.dedent("""
You are ResearchBuddy, a careful research assistant.
- If the user asks you to "research", "find info", "latest", "web", or references a library/framework/product,
 you SHOULD call the `wiki_search` tool at least once before finalizing your answer.
- When you call tools, be concise in the text you produce around the call.
- After receiving tool results, cite at least the page titles you used in your summary.
""").strip()


def chatbot(state: State) -> Dict[str, Any]:
   """Single step: call the LLM (with tools bound) on the current messages."""
   return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(TOOLS))
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")


memory = InMemorySaver()
graph = graph_builder.compile(checkpointer=memory)

We define our graph state to store the running message thread and bind our Gemini model to the wiki_search tool, allowing it to call it when needed. We add a chatbot node and a tools node, wire them with conditional edges, and enable checkpointing with an in-memory saver. We now compile the graph so we can add steps, replay history, and resume from any checkpoint.

def print_last_message(event: Dict[str, Any]):
   """Pretty-print the last message in an event if available."""
   if "messages" in event and event["messages"]:
       msg = event["messages"][-1]
       try:
           if isinstance(msg, BaseMessage):
               msg.pretty_print()
           else:
               role = msg.get("role", "unknown")
               content = msg.get("content", "")
               print(f"\n[{role.upper()}]\n{content}\n")
       except Exception:
           print(str(msg))


def show_state_history(cfg: Dict[str, Any]) -> List[Any]:
   """Print a concise view of checkpoints; return the list as well."""
   history = list(graph.get_state_history(cfg))
   print("\n=== 📜 State history (most recent first) ===")
   for i, st in enumerate(history):
       n = st.next
       n_txt = f"{n}" if n else "()"
       print(f"{i:02d}) NumMessages={len(st.values.get('messages', []))}  Next={n_txt}")
   print("=== End history ===\n")
   return history


def pick_checkpoint_by_next(history: List[Any], node_name: str = "tools") -> Optional[Any]:
   """Pick the first checkpoint whose `next` includes a given node (e.g., 'tools')."""
   for st in history:
       nxt = tuple(st.next) if st.next else tuple()
       if node_name in nxt:
           return st
   return None

We add utility functions to make our LangGraph workflow easier to inspect and control. We use print_last_message to neatly display the most recent response, show_state_history to list all saved checkpoints, and pick_checkpoint_by_next to locate a checkpoint where the graph is about to run a specific node, such as the tools step.

config = {"configurable": {"thread_id": "demo-thread-1"}}


first_turn = {
   "messages": [
       {"role": "system", "content": SYSTEM_INSTRUCTIONS},
       {"role": "user", "content": "I'm learning LangGraph. Could you do some research on it for me?"},
   ]
}


print("\n==================== 🟢 STEP 1: First user turn ====================")
events = graph.stream(first_turn, config, stream_mode="values")
for ev in events:
   print_last_message(ev)


second_turn = {
   "messages": [
       {"role": "user", "content": "Ya. Maybe I'll build an agent with it!"}
   ]
}


print("\n==================== 🟢 STEP 2: Second user turn ====================")
events = graph.stream(second_turn, config, stream_mode="values")
for ev in events:
   print_last_message(ev)

We simulate two user interactions in the same thread by streaming events through the graph. We first provide system instructions and ask the assistant to research LangGraph, then follow up with a second user message about building an agent with it. Each step is checkpointed, allowing us to replay or resume from these states later.

print("\n==================== 🔁 REPLAY: Full state history ====================")
history = show_state_history(config)


to_replay = pick_checkpoint_by_next(history, node_name="tools")
if to_replay is None:
   to_replay = history[min(2, len(history) - 1)]


print("Chosen checkpoint to resume from:")
print("  Next:", to_replay.next)
print("  Config:", to_replay.config)


print("\n==================== ⏪ RESUME from chosen checkpoint ====================")
for ev in graph.stream(None, to_replay.config, stream_mode="values"):
   print_last_message(ev)


MANUAL_INDEX = None 
if MANUAL_INDEX is not None and 0 <= MANUAL_INDEX < len(history):
   chosen = history[MANUAL_INDEX]
   print(f"\n==================== 🧭 MANUAL RESUME @ index {MANUAL_INDEX} ====================")
   print("Next:", chosen.next)
   print("Config:", chosen.config)
   for ev in graph.stream(None, chosen.config, stream_mode="values"):
       print_last_message(ev)


print("\n✅ Done. You added steps, replayed history, and resumed from a prior checkpoint.")

We replay the full checkpoint history to see how our conversation evolves across steps and identify a useful point to resume. We then “time travel” by restarting from a selected checkpoint, and optionally from any manual index, so we continue the dialogue exactly from that saved state.

In conclusion, we have gained a clearer picture of how LangGraph’s checkpointing and time-travel capabilities bring flexibility and transparency to conversation management. By stepping through multiple user turns, replaying state history, and resuming from earlier points, we can experience firsthand the power of this framework in building reliable research agents or autonomous assistants. We recognize that this workflow is not just a demo, but a foundation that we can extend into more complex applications, where reproducibility and traceability are as important as the answers themselves.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.


