Ray or Dask? A Practical Guide for Data Scientists

As data scientists, we often work with large datasets or complex models that take a long time to run. To get results faster, we use tools that execute tasks in parallel or distribute them across multiple machines. Two popular Python libraries for this are Ray and Dask. Both speed up data processing and model training, but they are suited to different kinds of tasks.

In this article, we will explain what Ray and Dask are and when to choose each one.

 

What Are Dask and Ray?

 
Dask is a library used for handling large amounts of data. It is designed to work in a way that feels familiar to users of pandas, NumPy, or scikit-learn. Dask breaks data and tasks into smaller parts and runs them in parallel. This makes it perfect for data scientists who want to scale up their data analysis without learning many new concepts.
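As a minimal sketch of how familiar the API feels (the file pattern and column names here are hypothetical), pandas-style operations build a lazy task graph and nothing runs until you call .compute():

import dask.dataframe as dd

# Hypothetical file pattern; Dask reads the pieces as lazy partitions
df = dd.read_csv("data/sales-*.csv")

# Familiar pandas-style operations build a task graph instead of running immediately
summary = df.groupby("region")["amount"].mean()

# Work only happens when a concrete result is requested
print(summary.compute())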

Ray is a more general tool that helps you build and run distributed applications. It is particularly strong in machine learning and AI tasks.

Ray also has extra libraries built on top of it, like:

  • Ray Tune for tuning hyperparameters in machine learning
  • Ray Train for training models on multiple GPUs
  • Ray Serve for deploying models as web services

Ray is great if you want to build scalable machine learning pipelines or deploy AI applications that need to run complex tasks in parallel.
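At its core, Ray turns ordinary Python functions into parallel tasks with a decorator. A minimal sketch, assuming a local Ray installation:

import ray

ray.init()  # starts a local Ray cluster if one isn't already running

@ray.remote
def square(x):
    return x * x

# Each call returns an object reference immediately; tasks run in parallel
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]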

 

Feature Comparison

 
A structured comparison of Dask and Ray based on core attributes:
 

Feature | Dask | Ray
Primary Abstraction | DataFrames, Arrays, Delayed tasks | Remote functions, Actors
Best For | Scalable data processing, machine learning pipelines | Distributed machine learning training, tuning, and serving
Ease of Use | High for Pandas/NumPy users | Moderate, more boilerplate
Ecosystem | Integrates with scikit-learn, XGBoost | Built-in libraries: Tune, Serve, RLlib
Scalability | Very good for batch processing | Excellent, more control and flexibility
Scheduling | Work-stealing scheduler | Dynamic, actor-based scheduler
Cluster Management | Native or via Kubernetes, YARN | Ray Dashboard, Kubernetes, AWS, GCP
Community/Maturity | Older, mature, widely adopted | Growing fast, strong machine learning support

 

When to Use What?

 
Choose Dask if you:

  • Use Pandas/NumPy and want scalability
  • Process tabular or array-like data
  • Perform batch ETL or feature engineering
  • Need dataframe or array abstractions with lazy execution

Choose Ray if you:

  • Need to run many independent Python functions in parallel
  • Want to build machine learning pipelines, serve models, or manage long-running tasks
  • Need microservice-like scaling with stateful tasks (see the actor sketch below)
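For stateful tasks, Ray uses actors: classes whose methods run remotely while keeping state between calls. A minimal sketch, assuming a local Ray installation (the RequestCounter class is purely illustrative):

import ray

ray.init()

@ray.remote
class RequestCounter:
    def __init__(self):
        self.count = 0

    def increment(self):
        # State persists inside the actor across remote calls
        self.count += 1
        return self.count

counter = RequestCounter.remote()
print(ray.get([counter.increment.remote() for _ in range(3)]))  # [1, 2, 3]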

 

Ecosystem Tools

 
Both libraries offer or support a range of tools to cover the data science lifecycle, but with different emphasis:

 

Task | Dask | Ray
DataFrames | dask.dataframe | Modin (built on Ray or Dask)
Arrays | dask.array | No native support; relies on NumPy
Hyperparameter tuning | Manual or with Dask-ML | Ray Tune (advanced features)
Machine learning pipelines | dask-ml, custom workflows | Ray Train, Ray Tune, Ray AIR
Model serving | Custom Flask/FastAPI setup | Ray Serve
Reinforcement learning | Not supported | RLlib
Dashboard | Built-in, very detailed | Built-in, simplified

 

Real-World Scenarios

 

Large-Scale Data Cleaning and Feature Engineering

Use Dask.

Why? Dask integrates smoothly with pandas and NumPy. Many data teams already use these tools. If your dataset is too large to fit in memory, Dask can split it into smaller parts and process these parts in parallel. This helps with tasks like cleaning data and creating new features.

Example:

import dask.dataframe as dd
import numpy as np

# Read many CSV files from S3 in parallel as a single lazy dataframe
df = dd.read_csv('s3://data/large-dataset-*.csv')
# Filter rows and derive a new feature; nothing runs yet
df = df[df['amount'] > 100]
df['log_amount'] = df['amount'].map_partitions(np.log)
# Writing the output triggers the computation
df.to_parquet('s3://processed/output/')

 

This code reads multiple large CSV files from an S3 bucket in parallel using Dask. It filters rows where the amount column is greater than 100, applies a log transformation partition by partition, and writes the result as Parquet files. The operations are lazy; the final to_parquet step is what triggers the computation.

 

Parallel Hyperparameter Tuning for Machine Learning Models

Use Ray.

Why? Ray Tune is great for trying different settings when training machine learning models. It integrates with tools like PyTorch and XGBoost, and it can stop bad runs early to save time.

Example:

from ray import tune
from ray.tune.schedulers import ASHAScheduler

def train_fn(config):
    # Model training logic here; report the metric that ASHA compares,
    # e.g. tune.report(accuracy=...) inside the training loop
    # (the exact reporting API varies by Ray version)
    ...

tune.run(
    train_fn,
    config={"lr": tune.grid_search([0.01, 0.001, 0.0001])},
    scheduler=ASHAScheduler(metric="accuracy", mode="max")
)

 

This code defines a training function and uses Ray Tune to test different learning rates in parallel. The ASHA scheduler stops poorly performing trials early so resources stay focused on the most promising configurations.

 

Distributed Array Computations

Use Dask.

Why? Dask arrays are helpful when working with large numeric datasets. Dask splits the array into blocks and processes them in parallel.

Example:

import dask.array as da

x = da.random.random((10000, 10000), chunks=(1000, 1000))
y = x.mean(axis=0).compute()

 

This code creates a large random array divided into chunks that can be processed in parallel. Calling .compute() triggers the parallel calculation of the column means and returns a regular NumPy result.

 

Building an End-to-End Machine Learning Service

Use Ray.

Why? Ray is designed not just for model training but also for serving and lifecycle management. With Ray Serve, you can deploy models in production, run preprocessing logic in parallel, and even scale stateful actors.

Example:

from ray import serve

@serve.deployment
class ModelDeployment:
    def __init__(self):
        # load_model() is a placeholder for your own model-loading logic
        self.model = load_model()

    def __call__(self, request_body):
        # Note: for HTTP traffic, Ray Serve passes a Starlette Request object;
        # you would typically make __call__ async and read the payload with
        # await request.json(). Calls through a deployment handle pass data directly.
        data = request_body
        return self.model.predict([data])[0]

serve.run(ModelDeployment.bind())

 

This code defines a class to load a machine learning model and serve it through an API using Ray Serve. The class receives a request, makes a prediction using the model, and returns the result.

 

Final Recommendations

 

Use Case | Recommended Tool
Scalable data analysis (Pandas-style) | Dask
Large-scale machine learning training | Ray
Hyperparameter optimization | Ray
Out-of-core DataFrame computation | Dask
Real-time machine learning model serving | Ray
Custom pipelines with high parallelism | Ray
Integration with the PyData stack | Dask

 

Conclusion

 
Ray and Dask are both tools that help data scientists handle large amounts of data and run programs faster. Ray is good for tasks that need a lot of flexibility, like machine learning projects. Dask is useful if you want to work with big datasets using tools similar to Pandas or NumPy.

Which one you choose depends on what your project needs and the type of data you have. It’s a good idea to try both on small examples to see which one fits your work better.
 
 

Jayita Gulati is a machine learning enthusiast and technical writer driven by her passion for building machine learning models. She holds a Master’s degree in Computer Science from the University of Liverpool.



Databricks Invests in Naveen Rao’s New AI Hardware Startup

Ali Ghodsi, CEO and Co-Founder of Databricks, announced in a LinkedIn post on September 13 that the company is investing in a new AI hardware startup launched by Naveen Rao, former vice president of AI at Databricks.

Details of the company’s name, funding size, and product roadmap have not been disclosed yet.

“Over six months ago, Naveen Rao and I started discussing the potential to have a massive impact on the world of AI,” Ghodsi wrote. “Today, I’m excited to share that Naveen Rao is starting a company that I think has the potential to revolutionise the AI hardware space in fundamental ways.”

Rao, who previously founded Nervana (acquired by Intel) and MosaicML (acquired by Databricks), said the new project will focus on energy-efficient computing for AI. 

“The new project is about rethinking the foundations of compute with respect to AI to build a new machine that is vastly more power efficient. Brain Scale Efficiency!” he said.

Ghodsi highlighted Rao’s track record in entrepreneurship and his contributions at Databricks. “If anyone can pull this off, it’s Naveen,” he noted, adding that Rao will continue advising Databricks while leading the new venture.

Databricks has closed a $10 billion Series J funding round, raising its valuation to $62 billion. The company’s revenue is approaching a $3 billion annual run rate, with forecasts indicating it could turn free cash flow positive by late 2024.

Growth is being fueled by strong adoption of the Databricks Data Intelligence Platform, which integrates generative AI accelerators. The platform is seeing rapid uptake across enterprises, positioning Databricks as one of the leading players in the enterprise AI stack.

Rao described the move as an example of Databricks supporting innovation in the AI ecosystem. “I’m very proud of all the work we did at Mosaic and Databricks and love to see how Databricks will be driving the frontier of AI in the enterprise,” he said.



OpenAI Announces Grove, a Cohort for ‘Pre-Idea Individuals’ to Build in AI 

OpenAI announced a new program called Grove on September 12, which is aimed at assisting technical talent at the very start of their journey in building startups and companies. 

The ChatGPT maker says Grove isn’t a traditional startup accelerator; it offers ‘pre-idea’ individuals access to a dense talent network, including OpenAI’s researchers, along with other resources to build their ideas in the AI space.

The program will begin with five weeks of content hosted at OpenAI’s headquarters in San Francisco, United States, including in-person workshops, weekly office hours, and mentorship from OpenAI’s leaders. The first Grove cohort will consist of approximately 15 participants, and OpenAI is welcoming applicants from all domains and disciplines, across various experience levels.

“In addition to technical support and community, participants will also have the opportunity to get hands-on with new OpenAI tools and models before general availability,” said OpenAI in the blog post. 

Once the program is completed, the company says that participants will be able to explore opportunities to raise capital or pursue other avenues, internal or external to OpenAI. Interested applicants can fill out the form on OpenAI’s website by September 24.

Grove is in addition to other programs such as ‘Pioneers’ and ‘OpenAI for Startups’, which were announced earlier this year. 

The OpenAI Pioneers program is an initiative that helps companies deploy AI for real-world use cases. OpenAI’s research teams collaborate with these companies to solve their problems and expand their capabilities.

On the other hand, OpenAI for startups is an initiative designed to provide founders with AI tools, resources, and community support to scale their AI products. For instance, the program includes ‘live build hours’ where engineers from OpenAI provide hands-on demos, webinars, access to code repositories, ask me anything (AMA) sessions, case studies, and more. 

It also includes real-life meetups, events, and more to assist founders in their journey. If startups are backed by venture capital firms that are partners of OpenAI (Thrive Capital, Sequoia, a16z, Kleiner Perkins, and Conviction Partners), they are eligible for free API credits, rate limit upgrades, and interactions with the company’s team members, alongside invites to exclusive events. 




Uncommon Uses of Common Python Standard Library Functions


 

Introduction

 
You know the basics of Python’s standard library. You’ve probably used functions like zip() and groupby() to handle everyday tasks without fuss. But here’s what most developers miss: these same functions can solve surprisingly “uncommon” problems in ways you’ve probably never considered. This article explains some of these uses of familiar Python functions.

🔗 Link to the code on GitHub

 

1. itertools.groupby() for Run-Length Encoding

 
While most developers think of groupby() as a simple tool for grouping data logically, it’s also useful for run-length encoding — a compression technique that counts consecutive identical elements. This function naturally groups adjacent matching items together, so you can transform repetitive sequences into compact representations.

from itertools import groupby

# Analyze user activity patterns from server logs
user_actions = ['login', 'login', 'browse', 'browse', 'browse',
                'purchase', 'logout', 'logout']

# Compress into pattern summary
activity_patterns = [(action, len(list(group)))
                    for action, group in groupby(user_actions)]

print(activity_patterns)

# Calculate total time spent in each activity phase
total_duration = sum(count for action, count in activity_patterns)
print(f"Session lasted {total_duration} actions")

 

Output:

[('login', 2), ('browse', 3), ('purchase', 1), ('logout', 2)]
Session lasted 8 actions

 

The groupby() function identifies consecutive identical elements and groups them together. By converting each group to a list and measuring its length, you get a count of how many times each action occurred in sequence.
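The encoding is also reversible with the same module: itertools.repeat and chain can expand the (action, count) pairs back into the original sequence. A small sketch reusing the variables above:

from itertools import chain, repeat

# Decode the run-length pairs back into the original action sequence
decoded = list(chain.from_iterable(repeat(action, count)
                                   for action, count in activity_patterns))
print(decoded == user_actions)  # True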

 

2. zip() with * for Matrix Transposition

 
Matrix transposition — flipping rows into columns — becomes simple when you combine zip() with Python’s unpacking operator.

The unpacking operator (*) spreads your matrix rows as individual arguments to zip(), which then reassembles them by taking corresponding elements from each row.

# Quarterly sales data organized by product lines
quarterly_sales = [
    [120, 135, 148, 162],  # Product A by quarter
    [95, 102, 118, 125],   # Product B by quarter
    [87, 94, 101, 115]     # Product C by quarter
]

# Transform to quarterly view across all products
by_quarter = list(zip(*quarterly_sales))
print("Sales by quarter:", by_quarter)

# Calculate quarterly growth rates
quarterly_totals = [sum(quarter) for quarter in by_quarter]
growth_rates = [(quarterly_totals[i] - quarterly_totals[i-1]) / quarterly_totals[i-1] * 100
                for i in range(1, len(quarterly_totals))]
print(f"Growth rates: {[f'{rate:.1f}%' for rate in growth_rates]}")

 

Output:

Sales by quarter: [(120, 95, 87), (135, 102, 94), (148, 118, 101), (162, 125, 115)]
Growth rates: ['9.6%', '10.9%', '9.5%']

 

We unpack the lists first, and then the zip() function groups the first elements from each list, then the second elements, and so on.
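Because transposition is its own inverse, applying the same zip(*...) pattern to by_quarter restores the original product-wise rows (as tuples):

# Transposing twice returns to the original orientation
by_product = list(zip(*by_quarter))
print(by_product)
# [(120, 135, 148, 162), (95, 102, 118, 125), (87, 94, 101, 115)]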

 

3. bisect for Maintaining Sorted Order

 
Keeping data sorted as you add new elements typically requires expensive re-sorting operations, but the bisect module maintains order automatically using binary search algorithms.

The module has functions that help find the exact insertion point for new elements in logarithmic time, then place them correctly without disturbing the existing order.

import bisect

# Maintain a high-score leaderboard that stays sorted
class Leaderboard:
    def __init__(self):
        self.scores = []
        self.players = []

    def add_score(self, player, score):
        # Insert maintaining descending order
        pos = bisect.bisect_left([-s for s in self.scores], -score)
        self.scores.insert(pos, score)
        self.players.insert(pos, player)

    def top_players(self, n=5):
        return list(zip(self.players[:n], self.scores[:n]))

# Demo the leaderboard
board = Leaderboard()
scores = [("Alice", 2850), ("Bob", 3100), ("Carol", 2650),
          ("David", 3350), ("Eva", 2900)]

for player, score in scores:
    board.add_score(player, score)

print("Top 3 players:", board.top_players(3))

 

Output:

Top 3 players: [('David', 3350), ('Bob', 3100), ('Eva', 2900)]

 

This is useful for maintaining leaderboards, priority queues, or any ordered collection that grows incrementally over time.
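If you do not need a custom class, a simpler sketch keeps (-score, player) tuples in a plain list and lets bisect.insort maintain the order; negating the score makes the list's natural ascending order behave like a descending leaderboard:

import bisect

leaderboard = []
for player, score in [("Alice", 2850), ("Bob", 3100), ("Carol", 2650)]:
    # insort keeps the list sorted by the first tuple element (-score)
    bisect.insort(leaderboard, (-score, player))

top = [(player, -neg_score) for neg_score, player in leaderboard[:2]]
print(top)  # [('Bob', 3100), ('Alice', 2850)]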

 

4. heapq for Finding Extremes Without Full Sorting

 
When you need only the largest or smallest elements from a dataset, full sorting is inefficient. The heapq module uses heap data structures to efficiently extract extreme values without sorting everything.

import heapq

# Analyze customer satisfaction survey results
survey_responses = [
    ("Restaurant A", 4.8), ("Restaurant B", 3.2), ("Restaurant C", 4.9),
    ("Restaurant D", 2.1), ("Restaurant E", 4.7), ("Restaurant F", 1.8),
    ("Restaurant G", 4.6), ("Restaurant H", 3.8), ("Restaurant I", 4.4),
    ("Restaurant J", 2.9), ("Restaurant K", 4.2), ("Restaurant L", 3.5)
]

# Find top performers and underperformers without full sorting
top_rated = heapq.nlargest(3, survey_responses, key=lambda x: x[1])
worst_rated = heapq.nsmallest(3, survey_responses, key=lambda x: x[1])

print("Excellence awards:", [name for name, rating in top_rated])
print("Needs improvement:", [name for name, rating in worst_rated])

# Calculate performance spread
best_score = top_rated[0][1]
worst_score = worst_rated[0][1]
print(f"Performance range: {worst_score} to {best_score} ({best_score - worst_score:.1f} point spread)")

 

Output:

Excellence awards: ['Restaurant C', 'Restaurant A', 'Restaurant E']
Needs improvement: ['Restaurant F', 'Restaurant D', 'Restaurant J']
Performance range: 1.8 to 4.9 (3.1 point spread)

 

The heap algorithm maintains a partial order that efficiently tracks extreme values without organizing all data.
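The same idea extends to streaming data: keep a min-heap of size k and push each new item through it, so you track the k largest values without ever storing or sorting the full stream. A small sketch reusing survey_responses:

import heapq

def top_k_stream(items, k=3, key=lambda x: x[1]):
    """Track the k largest items from a stream without storing everything."""
    heap = []
    for item in items:
        entry = (key(item), item)
        if len(heap) < k:
            heapq.heappush(heap, entry)
        else:
            heapq.heappushpop(heap, entry)  # drop the current smallest if beaten
    return [item for _, item in sorted(heap, reverse=True)]

print(top_k_stream(survey_responses))
# [('Restaurant C', 4.9), ('Restaurant A', 4.8), ('Restaurant E', 4.7)]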

 

5. operator.itemgetter for Multi-Level Sorting

 
Complex sorting requirements often lead to convoluted lambda expressions or nested conditional logic. But operator.itemgetter provides an elegant solution for multi-criteria sorting.

This function creates key extractors that pull multiple values from data structures, enabling Python’s natural tuple sorting to handle complex ordering logic.

from operator import itemgetter

# Employee performance data: (name, department, performance_score, hire_date)
employees = [
    ("Sarah", "Engineering", 94, "2022-03-15"),
    ("Mike", "Sales", 87, "2021-07-22"),
    ("Jennifer", "Engineering", 91, "2020-11-08"),
    ("Carlos", "Marketing", 89, "2023-01-10"),
    ("Lisa", "Sales", 92, "2022-09-03"),
    ("David", "Engineering", 88, "2021-12-14"),
    ("Amanda", "Marketing", 95, "2020-05-18")
]

# Ascending by department, then ascending by performance score
sorted_employees = sorted(employees, key=itemgetter(1, 2))
# For descending performance within department:
dept_performance_sorted = sorted(employees, key=lambda x: (x[1], -x[2]))

print("Department performance rankings:")
current_dept = None
for name, dept, score, hire_date in dept_performance_sorted:
    if dept != current_dept:
        print(f"\n{dept} Department:")
        current_dept = dept
    print(f"  {name}: {score}/100")

 

Output:

Department performance rankings:

Engineering Department:
  Sarah: 94/100
  Jennifer: 91/100
  David: 88/100

Marketing Department:
  Amanda: 95/100
  Carlos: 89/100

Sales Department:
  Lisa: 92/100
  Mike: 87/100

 

The itemgetter(1, 2) function extracts the department and performance score from each tuple, creating composite sorting keys. Python’s tuple comparison naturally sorts by the first element (department), then by the second element (score) for items with matching departments.
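Because Python's sort is stable, you can get the same descending-within-ascending ordering without negating values: sort by the secondary key first, then by the primary key. A sketch using the same employees list:

from operator import itemgetter

# Sort by score descending first, then by department ascending;
# the stable sort preserves the score ordering within each department
by_score = sorted(employees, key=itemgetter(2), reverse=True)
dept_then_score = sorted(by_score, key=itemgetter(1))

# Matches dept_performance_sorted above
print([(name, dept, score) for name, dept, score, _ in dept_then_score])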

 

6. collections.defaultdict for Building Data Structures on the Fly

 
Creating complex nested data structures typically requires tedious existence checking before adding values, leading to repetitive conditional code that obscures your actual logic.

The defaultdict eliminates this overhead by automatically creating missing values using factory functions you specify.

from collections import defaultdict

books_data = [
    ("1984", "George Orwell", "Dystopian Fiction", 1949),
    ("Dune", "Frank Herbert", "Science Fiction", 1965),
    ("Pride and Prejudice", "Jane Austen", "Romance", 1813),
    ("The Hobbit", "J.R.R. Tolkien", "Fantasy", 1937),
    ("Foundation", "Isaac Asimov", "Science Fiction", 1951),
    ("Emma", "Jane Austen", "Romance", 1815)
]

# Create multiple indexes simultaneously
catalog = {
    'by_author': defaultdict(list),
    'by_genre': defaultdict(list),
    'by_decade': defaultdict(list)
}

for title, author, genre, year in books_data:
    catalog['by_author'][author].append((title, year))
    catalog['by_genre'][genre].append((title, author))
    catalog['by_decade'][year // 10 * 10].append((title, author))

# Query the catalog
print("Jane Austen books:", dict(catalog['by_author'])['Jane Austen'])
print("Science Fiction titles:", len(catalog['by_genre']['Science Fiction']))
print("1960s publications:", dict(catalog['by_decade']).get(1960, []))

 

Output:

Jane Austen books: [('Pride and Prejudice', 1813), ('Emma', 1815)]
Science Fiction titles: 2
1960s publications: [('Dune', 'Frank Herbert')]

 

The defaultdict(list) automatically creates empty lists for any new key you access, eliminating the need to check if key not in dictionary before appending values.
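The factory can also be another defaultdict, so nested indexes build themselves on the fly. A small sketch reusing books_data to index titles by genre and decade:

from collections import defaultdict

# Two-level index: genre -> decade -> list of titles
genre_by_decade = defaultdict(lambda: defaultdict(list))

for title, author, genre, year in books_data:
    genre_by_decade[genre][year // 10 * 10].append(title)

print(genre_by_decade['Science Fiction'][1960])  # ['Dune']
print(genre_by_decade['Romance'][1810])          # ['Pride and Prejudice', 'Emma']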

 

7. string.Template for Safe String Formatting

 
Standard string formatting methods like f-strings and .format() fail when expected variables are missing. But string.Template keeps your code running even with incomplete data. The template system leaves undefined variables in place rather than crashing.

from string import Template

report_template = Template("""
=== SYSTEM PERFORMANCE REPORT ===
Generated: $timestamp
Server: $server_name

CPU Usage: $cpu_usage%
Memory Usage: $memory_usage%
Disk Space: $disk_usage%

Active Connections: $active_connections
Error Rate: $error_rate%

${detailed_metrics}

Status: $overall_status
Next Check: $next_check_time
""")

# Simulate partial monitoring data (some sensors might be offline)
monitoring_data = {
    'timestamp': '2024-01-15 14:30:00',
    'server_name': 'web-server-01',
    'cpu_usage': '23.4',
    'memory_usage': '67.8',
    # Missing: disk_usage, active_connections, error_rate, detailed_metrics
    'overall_status': 'OPERATIONAL',
    'next_check_time': '15:30:00'
}

# Generate report with available data, leaving gaps for missing info
report = report_template.safe_substitute(monitoring_data)
print(report)
# Output shows available data filled in, missing variables left as $placeholders
print("\n" + "="*50)
print("Missing data can be filled in later:")
additional_data = {'disk_usage': '45.2', 'error_rate': '0.1'}
updated_report = Template(report).safe_substitute(additional_data)
print("Disk usage now shows:", "45.2%" in updated_report)

 
Output:

=== SYSTEM PERFORMANCE REPORT ===
Generated: 2024-01-15 14:30:00
Server: web-server-01

CPU Usage: 23.4%
Memory Usage: 67.8%
Disk Space: $disk_usage%

Active Connections: $active_connections
Error Rate: $error_rate%

${detailed_metrics}

Status: OPERATIONAL
Next Check: 15:30:00


==================================================
Missing data can be filled in later:
Disk usage now shows: True

 

The safe_substitute() method processes available variables while preserving undefined placeholders for later completion. This creates fault-tolerant systems where partial data produces meaningful partial results rather than complete failure.

This approach is useful for configuration management, report generation, email templating, or any system where data arrives incrementally or might be temporarily unavailable.
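For comparison, the strict substitute() method raises a KeyError on the first missing variable, which is the better choice when a gap should be treated as an error rather than deferred:

from string import Template

t = Template("CPU: $cpu%  Disk: $disk%")

print(t.safe_substitute(cpu="23.4"))   # CPU: 23.4%  Disk: $disk%

try:
    t.substitute(cpu="23.4")           # disk is missing
except KeyError as e:
    print("Missing variable:", e)      # Missing variable: 'disk'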

 

Conclusion

 
The Python standard library contains solutions to problems you didn’t know it could solve. What we discussed here shows how familiar functions can handle non-trivial tasks.

Next time you start writing a custom function, pause and explore what’s already available. The tools in the Python standard library often provide elegant solutions that are faster, more reliable, and require zero additional setup.

Happy coding!
 
 

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.




