10 Surprising Things You Can Do with Python’s time module


 

Introduction

 
Most Python developers are familiar with the time module for its handy functions such as time.sleep(). That function alone makes the module the go-to for pausing execution, a simple but essential tool. However, the time module is far more versatile, offering a suite of functions for precise measurement, time conversion, and formatting that often go unnoticed. Exploring these capabilities can unlock more efficient ways to handle time-related tasks in your data science and other coding projects.

I’ve gotten some flak for the naming of previous “10 Surprising Things” articles, and I get it. “Yes, it is so very surprising that I can perform date and time tasks with the datetime module, thank you.” Valid criticism. However, the name is sticking because it’s catchy, so deal with it 🙂

In any case, here are 10 surprising and useful things you can do with Python’s time module.

 

1. Accurately Measure Elapsed Wall-Clock Time with time.monotonic()

 
While you might automatically go for time.time() to measure how long a function takes, it has a critical flaw: it’s based on the system clock, which can be changed manually or by network time protocols. This can lead to inaccurate or even negative time differences. A more robust solution is time.monotonic(). This function returns the value of a monotonic clock, which cannot go backward and is unaffected by system time updates. This really does make it the ideal choice for measuring durations reliably.

import time

start_time = time.monotonic()

# Simulate a task
time.sleep(2)

end_time = time.monotonic()
duration = end_time - start_time

print(f"The task took {duration:.2f} seconds.")

 

Output:

The task took 2.01 seconds.
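
If you time blocks of code like this often, the pattern is easy to wrap in a small helper. Below is a minimal sketch (the timer name and label argument are my own, not part of the standard library) that combines time.monotonic() with contextlib.contextmanager:

import time
from contextlib import contextmanager

@contextmanager
def timer(label="task"):
    # Measure the block with the monotonic clock so that system clock
    # adjustments cannot skew the result
    start = time.monotonic()
    try:
        yield
    finally:
        print(f"{label} took {time.monotonic() - start:.2f} seconds.")

with timer("sleep demo"):
    time.sleep(1)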

 

2. Measure CPU Processing Time with time.process_time()

 
Sometimes, you don’t care about the total time passed (wall-clock time). Instead, you might want to know how much time the CPU actually spent executing your code. This is crucial for benchmarking algorithm efficiency, as it ignores time spent sleeping or waiting for I/O operations. The time.process_time() function returns the sum of the system and user CPU time of the current process, providing a pure measure of computational effort.

import time

start_cpu = time.process_time()

# A CPU-intensive task
total = 0
for i in range(10_000_000):
    total += i

end_cpu = time.process_time()
cpu_duration = end_cpu - start_cpu

print(f"The CPU-intensive task took {cpu_duration:.2f} CPU seconds.")

 

Output:

The CPU-intensive task took 0.44 CPU seconds.
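
To see the distinction in practice, compare time.process_time() with time.monotonic() around a block that mostly sleeps. The exact numbers will vary by machine; the point is that the CPU measurement ignores the sleep:

import time

start_wall = time.monotonic()
start_cpu = time.process_time()

time.sleep(1)                   # waiting, not computing
total = sum(range(1_000_000))   # a small amount of actual CPU work

print(f"Wall-clock time: {time.monotonic() - start_wall:.2f}s")   # roughly a second or more
print(f"CPU time:        {time.process_time() - start_cpu:.2f}s") # only the sum() work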

 

3. Get High-Precision Timestamps with time.perf_counter()

 
For highly precise timing, especially for very short durations, time.perf_counter() is an essential tool. It returns the value of a performance counter, the clock with the highest available resolution on your system. The count is system-wide and includes time elapsed during sleep, which makes it perfect for benchmarking scenarios where every nanosecond counts.

import time

start_perf = time.perf_counter()

# A very short operation
_ = [x*x for x in range(1000)]

end_perf = time.perf_counter()
perf_duration = end_perf - start_perf

print(f"The short operation took {perf_duration:.6f} seconds.")

 

Output:

The short operation took 0.000028 seconds.
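
If working with a float is inconvenient at this scale, Python 3.7+ also offers time.perf_counter_ns(), which reads the same clock but returns an integer number of nanoseconds, avoiding floating-point rounding for very small intervals:

import time

start_ns = time.perf_counter_ns()

# The same short operation as above
_ = [x*x for x in range(1000)]

# Integer nanoseconds, with no floating-point precision loss
elapsed_ns = time.perf_counter_ns() - start_ns
print(f"The short operation took {elapsed_ns} ns.")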

 

4. Convert Timestamps to Readable Strings with time.ctime()

 
The output of time.time() is a float representing seconds since the “epoch” (January 1, 1970, for Unix systems). While useful for calculations, it’s not human-readable. The time.ctime() function takes this timestamp and converts it into a standard, easy-to-read string format, like ‘Thu Jul 31 16:32:30 2025’.

import time

current_timestamp = time.time()
readable_time = time.ctime(current_timestamp)

print(f"Timestamp: {current_timestamp}")
print(f"Readable Time: {readable_time}")

 

Output:

Timestamp: 1754044568.821037
Readable Time: Fri Aug  1 06:36:08 2025
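
Note that the timestamp argument is optional: calling time.ctime() with no argument formats the current time, so the example above could be shortened to a single call:

import time

# With no argument, ctime() uses the current time
print(f"Readable Time: {time.ctime()}")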

 

5. Parse Time from a String with time.strptime()

 
Let’s say you have time information stored as a string and need to convert it into a structured time object for further processing. time.strptime() (string parse time) is your function. You provide the string and a format code that specifies how the date and time components are arranged. It returns a struct_time object, a tuple-like structure whose fields (year, month, day, and so on) can then be accessed individually.

import time

date_string = "31 July, 2025"
format_code = "%d %B, %Y"

time_struct = time.strptime(date_string, format_code)

print(f"Parsed time structure: {time_struct}")
print(f"Year: {time_struct.tm_year}, Month: {time_struct.tm_mon}")

 

Output:

Parsed time structure: time.struct_time(tm_year=2025, tm_mon=7, tm_mday=31, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=212, tm_isdst=-1)
Year: 2025, Month: 7
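
The same function handles strings that include a time of day, which is common when parsing log files. Here is a quick sketch using a made-up log timestamp:

import time

log_entry = "2025-07-31 16:32:30"
parsed = time.strptime(log_entry, "%Y-%m-%d %H:%M:%S")

print(f"Hour: {parsed.tm_hour}, Minute: {parsed.tm_min}, Second: {parsed.tm_sec}")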

 

6. Format Time into Custom Strings with time.strftime()

 
The opposite of parsing is formatting. time.strftime() (string format time) takes a struct_time object (like the one returned by strptime or localtime) and formats it into a string according to your specified format codes. This gives you full control over the output, whether you prefer “2025-07-31” or “Thursday, July 31”.

import time

# Get current time as a struct_time object
current_time_struct = time.localtime()

# Format it in a custom way
formatted_string = time.strftime("%Y-%m-%d %H:%M:%S", current_time_struct)
print(f"Custom formatted time: {formatted_string}")

day_of_week = time.strftime("%A", current_time_struct)
print(f"Today is {day_of_week}.")

 

Output:

Custom formatted time: 2025-08-01 06:41:33
Today is Friday.
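
Because strptime() and strftime() are inverses, chaining them is a quick way to convert a date string from one layout to another without reaching for datetime:

import time

raw = "31 July, 2025"
reformatted = time.strftime("%Y-%m-%d", time.strptime(raw, "%d %B, %Y"))

print(f"Reformatted: {reformatted}")  # 2025-07-31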

 

7. Get Basic Timezone Information with time.timezone and time.tzname

 
While the datetime module (and libraries like pytz) is better suited to complex timezone handling, the time module offers some basic information. time.timezone provides the offset of the local non-DST (Daylight Saving Time) timezone in seconds west of UTC, while time.tzname is a tuple containing the names of the local non-DST and DST timezones.

import time

# Offset in seconds west of UTC
offset_seconds = time.timezone

# Timezone names (standard, daylight saving)
tz_names = time.tzname

print(f"Timezone offset: {offset_seconds / 3600} hours west of UTC")
print(f"Timezone names: {tz_names}")

 

Output:

Timezone offset: 5.0 hours west of UTC
Timezone names: ('EST', 'EDT')
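
Two related attributes round out the picture: time.daylight is nonzero if a DST timezone is defined, and time.altzone gives the DST offset in seconds west of UTC:

import time

if time.daylight:
    # Offset of the local DST timezone, in seconds west of UTC
    print(f"DST offset: {time.altzone / 3600} hours west of UTC")
else:
    print("No DST timezone is defined on this system.")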

 

8. Convert Between UTC and Local Time with time.gmtime() and time.localtime()

 
Working with different timezones can be tricky. A common practice is to store all time data in Coordinated Universal Time (UTC) and convert it to local time only for display. The time module facilitates this with time.gmtime() and time.localtime(). These functions take a timestamp in seconds and return a struct_time object — gmtime() returns it in UTC, while localtime() returns it for your system’s configured timezone.

import time

timestamp = time.time()

# Convert timestamp to struct_time in UTC
utc_time = time.gmtime(timestamp)

# Convert timestamp to struct_time in local time
local_time = time.localtime(timestamp)

print(f"UTC Time: {time.strftime('%Y-%m-%d %H:%M:%S', utc_time)}")
print(f"Local Time: {time.strftime('%Y-%m-%d %H:%M:%S', local_time)}")

 

Output:

UTC Time: 2025-08-01 10:47:58
Local Time: 2025-08-01 06:47:58
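
If you need to go the other way, from a UTC struct_time back to a timestamp, note that time.mktime() (covered next) assumes local time. The standard library’s calendar.timegm() is its UTC counterpart:

import calendar
import time

utc_struct = time.gmtime()

# calendar.timegm() converts a UTC struct_time back into a Unix timestamp
round_trip = calendar.timegm(utc_struct)

print(f"Round-tripped UTC timestamp: {round_trip}")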

 

9. Perform the Inverse of time.localtime() with time.mktime()

 
time.localtime() converts a timestamp into a struct_time object, which is useful… but how do you go in the reverse direction? The time.mktime() function does exactly this. It takes a struct_time object (representing local time) and converts it back into a floating-point number representing seconds since the epoch. This is then useful for calculating future or past timestamps or performing date arithmetic.

import time

# Get current local time structure
now_struct = time.localtime()

# Create a modified time structure for one hour from now
future_struct_list = list(now_struct)
future_struct_list[3] += 1 # Add 1 to the hour (tm_hour)
future_struct = time.struct_time(future_struct_list)

# Convert back to a timestamp
future_timestamp = time.mktime(future_struct)

print(f"Current timestamp: {time.time():.0f}")
print(f"Timestamp in one hour: {future_timestamp:.0f}")

 

Output:

Current timestamp: 1754045415
Timestamp in one hour: 1754049015
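
For a simple fixed offset like the one above, plain arithmetic on timestamps (time.time() + 3600) would also do the job. time.mktime() is more interesting when you start from a calendar date rather than from “now”, for example by pairing it with strptime():

import time

# Seconds-since-epoch timestamp for a specific local calendar date
new_year = time.mktime(time.strptime("01 January 2026", "%d %B %Y"))

print(f"Seconds until 2026: {new_year - time.time():.0f}")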

 

10. Get Thread-Specific CPU Time with time.thread_time()

 
In multi-threaded applications, time.process_time() gives you the total CPU time for the entire process. But what if you want to profile the CPU usage of a specific thread? In this case, time.thread_time() is the function you are looking for. This function returns the sum of system and user CPU time for the current thread, allowing you to identify which threads are the most computationally expensive.

import time
import threading

def worker_task():
    start_thread_time = time.thread_time()

    # Simulate work
    _ = [i * i for i in range(10_000_000)]

    end_thread_time = time.thread_time()

    print(f"Worker thread CPU time: {end_thread_time - start_thread_time:.2f}s")

# Run the task in a separate thread
thread = threading.Thread(target=worker_task)
thread.start()
thread.join()

print(f"Total process CPU time: {time.process_time():.2f}s")

 

Output:

Worker thread CPU time: 0.23s
Total process CPU time: 0.32s
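
To see why this is useful, give two threads different amounts of work and compare their readings; each call to time.thread_time() counts only the CPU time of the thread that calls it (the workload sizes here are arbitrary):

import time
import threading

def profiled_worker(label, n):
    start = time.thread_time()
    _ = sum(i * i for i in range(n))
    # Only this thread's CPU time is counted
    print(f"{label} worker used {time.thread_time() - start:.2f}s of CPU time")

light = threading.Thread(target=profiled_worker, args=("light", 1_000_000))
heavy = threading.Thread(target=profiled_worker, args=("heavy", 10_000_000))
light.start(); heavy.start()
light.join(); heavy.join()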

 

Wrapping Up

 
The time module is an integral and powerful part of Python’s standard library. While time.sleep() is undoubtedly its most famous function, its capabilities for timing, duration measurement, and time formatting make it a handy tool for all sorts of practical tasks.

By moving beyond the basics, you can learn new tricks for writing more accurate and efficient code. For more advanced, object-oriented date and time manipulation, be sure to check out surprising things you can do with the datetime module next.
 
 

Matthew Mayo (@mattmayo13) holds a master’s degree in computer science and a graduate diploma in data mining. As managing editor of KDnuggets & Statology, and contributing editor at Machine Learning Mastery, Matthew aims to make complex data science concepts accessible. His professional interests include natural language processing, language models, machine learning algorithms, and exploring emerging AI. He is driven by a mission to democratize knowledge in the data science community. Matthew has been coding since he was 6 years old.




