The Machine Learning Lessons I’ve Learned This Month


Most days in machine learning are the same.

Coding, waiting for results, interpreting them, returning to coding. Plus the occasional intermediate presentation of one's progress. But things mostly being the same does not mean there is nothing to learn. Quite the contrary! Two to three years ago, I started a daily habit of writing down lessons learned from my ML work. Looking back through this month's entries, I found three practical lessons that stand out:

  1. Keep logging simple
  2. Use an experimental notebook
  3. Keep overnight runs in mind

Keep logging simple

For years, I used Weights & Biases (W&B)* as my go-to experiment logger. At one point, I was among the top 5% of all active users. The stats in the figure below show that, by then, I had trained close to 25,000 models, used a cumulative 5,000 hours of compute, and run more than 500 hyperparameter searches. I used it for papers, for big projects like weather prediction on large datasets, and for tracking countless small-scale experiments.

My former stats from using W&B for experiment logging. Image by the author.

And W&B really is a great tool: if you want beautiful dashboards and are collaborating** with a team, W&B shines. Until recently, while reconstructing data from trained neural networks, I ran multiple hyperparameter sweeps, and W&B's visualization capabilities were invaluable: I could directly compare reconstructions across runs.

But I realized that for most of my research projects, W&B was overkill. I rarely revisited individual runs, and once a project was done, the logs just sat there, untouched ever after. When I later refactored the data reconstruction project mentioned above, I explicitly removed the W&B integration. Not because anything was wrong with it, but because it wasn't necessary.

Now, my setup is much simpler. I just log selected metrics to CSV and text files, writing directly to disk. For hyperparameter searches, I rely on Optuna. Not even the distributed version with a central server — just local Optuna, saving study states to a pickle file. If something crashes, I reload and continue. Pragmatic and sufficient (for my use cases).
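For illustration, here is a minimal sketch of what such a setup can look like. The file names, the logged metrics, and the train_model stand-in are placeholders of my choosing rather than code from the original project; only the general pattern (plain CSV logging plus a local Optuna study checkpointed to a pickle file) follows the description above.

    import csv
    import os
    import pickle

    import optuna

    LOG_PATH = "metrics.csv"    # plain CSV on disk, one row per logged entry
    STUDY_PATH = "study.pkl"    # pickled Optuna study, reloaded after a crash

    def log_metrics(step, **metrics):
        # Append selected metrics to a CSV file, writing the header once.
        new_file = not os.path.exists(LOG_PATH)
        with open(LOG_PATH, "a", newline="") as f:
            writer = csv.writer(f)
            if new_file:
                writer.writerow(["step", *metrics.keys()])
            writer.writerow([step, *metrics.values()])

    def train_model(lr):
        # Placeholder for the real training run; returns a validation loss.
        return (lr - 1e-3) ** 2

    def objective(trial):
        lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
        val_loss = train_model(lr)
        log_metrics(trial.number, lr=lr, val_loss=val_loss)
        return val_loss

    # Resume the study if a previous run crashed, otherwise start fresh.
    if os.path.exists(STUDY_PATH):
        with open(STUDY_PATH, "rb") as f:
            study = pickle.load(f)
    else:
        study = optuna.create_study(direction="minimize")

    # Optimize in small chunks and checkpoint the study state after each one.
    for _ in range(10):
        study.optimize(objective, n_trials=5)
        with open(STUDY_PATH, "wb") as f:
            pickle.dump(study, f)

Everything lives in two files next to the code, and resuming after a crash is just a matter of re-running the script.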

The key insight here is this: logging is not the work; it's a support system. Spending most of your time deciding what to log (gradients? weights? distributions? and at which frequency?) can easily distract you from the actual research. For me, simple, local logging covers all needs with minimal setup effort.

Maintain an experimental lab notebook

In December 1939, William Shockley wrote an idea down in his lab notebook: replace vacuum tubes with semiconductors. Roughly 17 years later, Shockley and two colleagues at Bell Labs shared the Nobel Prize in Physics for the invention of the transistor.

While most of us aren't writing Nobel-worthy entries into our notebooks, we can still learn from the principle. Granted, in machine learning our laboratories have none of the chemicals or test tubes we all picture when we think of a laboratory. Instead, our labs are often our computers; the same device I use to write these lines has trained countless models over the years. And these labs are inherently portable, especially when we develop remotely on high-performance compute clusters. Even better, thanks to highly skilled administrative staff, these clusters run 24/7, so there's always time to run an experiment!

But the question is: which experiment? Here, a former colleague introduced me to the idea of maintaining a lab notebook, and lately I've returned to it in the simplest form possible. Before starting long-running experiments, I write down:

what I’m testing, and why I’m testing it.

Then, when I come back later — usually the next morning — I can immediately see which results are ready and what I had hoped to learn. It’s simple, but it changes the workflow. Instead of just “rerun until it works,” these dedicated experiments become part of a documented feedback loop. Failures are easier to interpret. Successes are easier to replicate.
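In practice, this can be as lightweight as a helper that appends a dated entry to a plain-text file right before a run is launched. The sketch below shows one possible shape of such an entry; the file name, the fields, and the example experiment are invented for illustration, not taken from my actual projects.

    from datetime import datetime
    from pathlib import Path

    NOTEBOOK = Path("lab_notebook.md")  # one plain-text notebook per project

    def record_experiment(name, what, why, results_dir):
        # Append a short, dated entry before launching a long-running experiment.
        entry = (
            f"\n## {datetime.now():%Y-%m-%d %H:%M} | {name}\n"
            f"What I'm testing: {what}\n"
            f"Why I'm testing it: {why}\n"
            f"Results will land in: {results_dir}\n"
        )
        with NOTEBOOK.open("a") as f:
            f.write(entry)

    # Hypothetical example entry, written just before kicking off the run.
    record_experiment(
        name="ablation-auxiliary-loss",
        what="whether removing the auxiliary loss changes reconstruction quality",
        why="I suspect it dominates early training and hides the main signal",
        results_dir="runs/ablation_aux_loss",
    )

The next morning, the notebook entry and the results directory can be read side by side.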

Run experiments overnight

That's a small but painful lesson that I (re-)learned this month.

On a Friday evening, I discovered a bug that might affect my experiment results. I patched it and reran the experiments to validate. By Saturday morning, the runs had finished — but when I inspected the results, I realized I had forgotten to include a key ablation. Which meant … another full day of waiting.

In ML, overnight time is precious. For us programmers, it’s rest. For our experiments, it’s work. If we don’t have an experiment running while we sleep, we’re effectively wasting free compute cycles.

That doesn't mean you should run experiments just for the sake of it. But whenever there is a meaningful one to launch, the evening is the perfect time to start it. Clusters are often under-utilized, resources become available more quickly, and, most importantly, you will have results to analyse the next morning.

A simple trick is to plan this deliberately. As Cal Newport mentions in his book “Deep Work”, good workdays start the night before. If you know tomorrow’s tasks today, you can set up the right experiments in time.


* This is not meant as bashing W&B (it would have been the same with, e.g., MLflow); rather, it is a call for users to evaluate what their project goals are and then spend the majority of their time pursuing those goals with utmost focus.

** Mere collaboration is, in my eyes, not enough to warrant such shared dashboards. The insight you gain from shared tools needs to outweigh the time you spend setting them up.





Northwestern Magazine: Riding the AI Wave


Although Hammond says he barely remembers his life before computers and coding, there was indeed a time when his world was much more analog. Hammond grew up on the East Coast and spent his high school years in Salt Lake City, where his mother was a social worker and his father was a professor of archaeology at the University of Utah. Over the course of 50 years, Philip C. Hammond excavated several sites in the Middle East and made dozens of trips to Jordan, earning him the nickname Lion of Petra. Kris joined these expeditions for three summers, working as his father’s surveyor and draftsman.

“Now, once a week, I ask ChatGPT for a biography of my father, as an experiment,” Hammond says, bemused. “Sometimes, it gives me a beautifully inaccurate bio that makes him sound like Indiana Jones. Other times, it says he is a tech entrepreneur and that I have followed in his footsteps.”

While those biographical tidbits are mere AI-generated falsehoods, Hammond and his father have both traced intelligence from different worlds — one etched in stone and another in silicon. Wanting a deeper understanding of the meaning of intelligence and thought, Hammond studied philosophy as an undergraduate at Yale University and planned to go to law school after graduation. But his trail diverged when a fellow member of a local sci-fi club suggested that Hammond, who had taken one computer science class, try working as a programmer.

“After nine months as a programmer, I decided that’s what I wanted to do for a living,” Hammond says.

That sci-fi club guy was Chris Riesbeck, who is also now a professor of computer science at McCormick. Hammond earned his doctorate in computer science from Yale in 1986. But he didn’t abandon philosophy entirely. Instead, he applied those abstract frameworks — consciousness, knowledge, creativity, logic and the nature of reason — to the pursuit of intelligent systems.

“The structure of thought always fascinated me,” Hammond says. “Looking at it from the perspective of how humans think and how machines ‘think’ — and how we can ‘think’ together — became a driver for me.”

But the word “think” is tenuous in this context, he says. There’s a fundamental and important distinction between true human cognition and what current AI can do — namely, sophisticated mimicry. AI isn’t trying to critically assess data to devise correct answers, says Hammond. Instead, it’s a probabilistic engine, sifting through language likelihoods to finish a sentence — like the predictive text you might see on your phone while composing a message. It is seeking the most likely conclusion to any given string of words.

“These are responsive systems,” he says. “They aren’t reasoning. They just hold words together. That’s why they have problems answering questions about recent events.”
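To make that "most likely continuation" idea concrete, here is a toy illustration in Python. It is emphatically not how production language models are built (those are neural networks trained on vast corpora); it is only a tiny bigram table that returns whichever word most often followed a given word in its training text, with no reasoning involved.

    from collections import Counter, defaultdict

    # A tiny corpus standing in for the vast text a real model is trained on.
    corpus = "the cat sat on the mat the cat sat on the sofa the dog sat on the mat".split()

    # Count which word most often follows each word (a bigram table).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def most_likely_next(word):
        # Return the continuation seen most often in the corpus; no reasoning involved.
        return following[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # likely prints "cat", purely the most frequent follower here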








America’s Biggest Cyber Crisis Isn’t Just Artificial Intelligence


In 2021, Patrick Hearn wrote “Digital Identity Is a National Security Issue,” in which he argued that the U.S. government has put the safeguarding of digital identity on the back burner despite a host of threats from foreign adversaries. Four years later, we asked Patrick to revisit his analysis in light of advancements in the cyber capabilities of both the United States and its adversaries.

Image: U.S. Air Force (Photo by Airman 1st Class Andrew J. Alvarado)

In your 2021 article, “Digital Identity Is a National Security Issue,” you argue the federal government has long treated digital identity as a secondary issue and should do more








Futuri Announces Advancements to TopLine AI, Featuring Instant Custom Research, Sales Presentations, and CRM Integration to Help Teams Close Business Faster


AUSTIN, Texas, Sept. 3, 2025 /PRNewswire/ — Futuri has launched major AI upgrades to TopLine, its sales intelligence system trusted by media companies worldwide. Using Agentic AI that integrates directly into CRMs like Salesforce and HubSpot, TopLine is redefining how Sales Executives prepare, present, close, and renew business.

Closing the Gap Between Data and Revenue

With new CRM integration and AI-driven automation, TopLine equips sales teams to deliver custom client research and full presentations instantly, designed and ready to present without leaving the CRM they already use daily. This innovation reduces the need to bounce between multiple research tools, eliminating traditional bottlenecks and allowing sales teams to move at the speed of sales.

Key new CRM integration capabilities include:

  • Direct CRM Sync – Salesforce, HubSpot, and other CRMs, so sales teams can seamlessly integrate TopLine intelligence into existing workflows.
  • Automatic Personality Profiles + Digital Research – Instant insights into buyers and their markets.
  • AI-Designed Presentations in Minutes – Polished, data-backed, and client-ready.
  • Pre-Built Broadcast + Digital Schedules – Campaign proposals included, giving AEs a fast onramp to new revenue opportunities (with easy customization).
  • Built-In Co-Op Opportunity Finder – Surfaces hidden funding sources to help close more deals.
  • Trend + Business Opportunity Identification – Pinpoints where growth potential is emerging.

Accelerating Business Growth

“TopLine has always been about giving broadcasters a competitive edge in sales,” said Kathy Eagle, VP/GM of TopLine at Futuri. “We have built AI models that deliver research, creative, and campaign proposals in minutes instead of days. This empowers sellers to build trust faster, present smarter, and close more.”

Commitment to Broadcasters

These enhancements underscore Futuri’s mission to help broadcasters win more business in a competitive media landscape with less manual work. TopLine shortens cycles, improves win rates, and unlocks new revenue streams so sales teams can spend more time building relationships.

Media Contact:
For more information: www.FuturiMedia.com/TopLineCRM
Contact Mary Rogers | [email protected] | 877-221-7979 ext 450

About Futuri
Futuri is a global leader in AI solutions that drive audience and revenue growth for broadcasters, digital publishers, and content creators. Founded in 2009, Austin-based Futuri is at the forefront of AI-powered audience engagement and sales technology, trusted by thousands of broadcasters around the world. Key solutions include TopLine, a sales intelligence system designed to enhance local advertising sales and expedite the sales cycle; TopicPulse, an AI-powered story discovery system that provides real-time insights and predictions about trending topics; AudioAI, a cutting-edge system that enables broadcasters to create AI-powered hosts, streamline commercial production, and automate podcast creation; and POST, a system that automatically converts broadcasts into podcasts. More information about Futuri is available at www.FuturiMedia.com.

SOURCE Futuri Media




