
AI Research

Bioinspired trajectory modulation for effective slip control in robot manipulation



Here we delve into the implementation of proactive trajectory modulation as a means of slip control, which addresses our first hypothesis. We present the methodology and experimental setup used to examine the effectiveness of trajectory modulation in preventing slip incidents during robotic manipulation tasks. Furthermore, we introduce the concept of incorporating a forward model in proactive control, an approach aimed at slip avoidance, as per our second research hypothesis. We outline the key aspects of proactive control, its implementation and the experimental framework used to evaluate its performance in slip prevention.

Experimental setup

Traditional parallel-jaw grippers are still widely used in many robotic manipulation tasks, such as bin picking38. Usually, a motion planning module generates a reference trajectory for the robot by minimizing, for example, time or jerk before motion execution. The robot then executes this trajectory in an open-loop manner, with no real-time feedback on task failure/success due to slip. This limits the robot’s success, and solutions in the literature suggest increasing the grip force on the fly, which may not be feasible in many cases39,40,41. We address this issue using our data-driven predictive control concept, referred to as proactive control.

First, we collected a dataset of open-loop trajectory executions in which the robot manipulated objects from our train object set, shown in Supplementary Table 2. We equipped the fingers of a Franka Emika robotic arm with a pair of 4 × 4 uSkin tactile sensors and performed moving tasks with multiple objects. The uSkin tactile sensor has 16 sensing points (taxels) that measure triaxial forces: shear x, shear y and normal z (Fig. 1). To create various types of reference trajectory, we used two main motion strategies during data collection: (1) kinaesthetic robot motions, where a human user performed moving tasks with qualitatively fast and slow motions; and (2) automatic robot motions, where trapezoidal reference trajectories with various acceleration/deceleration values were sent as a reference Cartesian velocity. An Intel RealSense D435i camera mounted on the robot’s wrist measured the pose of an ArUco marker attached to the top of the object. The pose data were then post-processed to create binary slip labels based on the object’s in-hand displacement.
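As an illustrative sketch of this post-processing step, binary slip labels can be derived by thresholding the marker's in-hand rotation. The 6° threshold matches the slip classification threshold reported later in this section; the function and variable names are our assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical sketch: binary slip labels from the object's in-hand
# rotation (post-processed ArUco marker pose), thresholded at 6 degrees.
def slip_labels(rotation_deg, threshold_deg=6.0):
    """Return 1 (slip) where rotation exceeds the threshold, else 0."""
    rot = np.abs(np.asarray(rotation_deg, dtype=float))
    return (rot > threshold_deg).astype(int)

labels = slip_labels([0.0, 1.2, 4.0, 7.5, 9.0])
```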

ACTP

In this work, tactile data are referred to as frames, representing single time-step readings of the respective sensing modality. We consider (1) a set of context frames f0:c−1 = {f0, …, fc−1}, which are the previously observed frames with a context sequence length of c, and (2) a prediction horizon of length H − c (that is, the number of future frames to predict). Here we assume equal context and prediction window sizes, that is, H = 2c, where \({i}_{c}\in \{0,\ldots ,c-1\}\in {{\mathbb{Z}}}^{c}\), \({i}_{p}\in \{c,\ldots ,H-1\}\in {{\mathbb{Z}}}^{c}\) and \(i\in \{0,\ldots ,c-1\}\cup \{c,\ldots ,H-1\}\in {{\mathbb{Z}}}^{H}\) (Fig. 3b shows details of the samples in the context and prediction horizons). Although here we show the formulation for one time step f(t), generalization to training over the entire trajectory, that is, t = 0:T, is straightforward. A tactile prediction model \({\mathcal{F}}\) can then be defined as follows (we use *i:i+n = {*i, *i+1, …, *i+n} to denote variables representing a set of frames and \(\hat{{\bf{f}}}\) to denote the corresponding predicted values; Fig. 3b):

$${\hat{{\bf{f}}}}_{c:H-1}={\mathcal{F}}\left({{\bf{f}}}_{0:c-1}\right),$$

(1)

where \({\hat{{\bf{f}}}}_{c:H-1}\) represents the set of predicted frames \({\hat{{\bf{f}}}}_{i}\in {{\mathbb{R}}}^{48}\) (tactile images). The goal is to optimize equation (2) for each time step in the prediction horizon, indexed by ip:

$$\min \mathop{\sum }\limits_{i=c}^{H-1}{\mathcal{D}}\left({\hat{{\bf{f}}}}_{i},{{\bf{f}}}_{i}\right),$$

(2)

where \({\mathcal{D}}\) denotes the loss function in the tactile reading space or pixel space, such as \({{\mathcal{L}}}_{1}\) or \({{\mathcal{L}}}_{2}\), measuring the difference between the predicted and observed frames.
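As an illustrative sketch (variable names and the use of mean absolute error per frame are our assumptions), the \({{\mathcal{L}}}_{1}\) version of equation (2), summed over the prediction horizon, can be written as:

```python
import numpy as np

# Minimal sketch of equation (2) with an L1 loss: sum the per-frame
# mean absolute errors over the H - c predicted frames. Frames follow
# the paper's 48-dimensional tactile vectors (16 taxels x 3 axes).
def horizon_l1_loss(pred_frames, true_frames):
    pred = np.asarray(pred_frames, dtype=float)   # shape (H - c, 48)
    true = np.asarray(true_frames, dtype=float)
    return float(np.abs(pred - true).mean(axis=1).sum())

loss = horizon_l1_loss(np.zeros((5, 48)), np.ones((5, 48)))
```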

In a physical robot interaction, we aim to develop a cause–effect understanding of the robot’s actions. Thus, we condition the prediction model on the past context frames f0:c−1, the past robot trajectory x0:c−1 and the planned robot actions ac−1:H−2 to output the future frames \({\hat{{\bf{f}}}}_{c:H-1}\) (which are known in our physical robot interaction datasets but unknown at inference time). Here \({{\bf{x}}}_{{i}_{c}}\in {{\mathbb{R}}}^{6}\) represents the robot trajectory at past steps, and \({{\bf{a}}}_{{i}_{p}}\in {{\mathbb{R}}}^{6}\) represents the planned future robot actions at time t. The model assumes a known and nearly constant sampling frequency (typically around 10 Hz with less than 10% variance), and we work with discrete values. The prediction model can be expressed as

$${\hat{{\bf{f}}}}_{c:H-1}={\mathcal{F}}\left({{\bf{f}}}_{0:c-1},{{\bf{x}}}_{0:c-1},{{\bf{a}}}_{c-1:H-2}\right),$$

(3)

where f denotes tactile frames. In model predictive control, which commonly uses forward prediction models like those described here, future robot actions are considered as a batch of candidate actions. The optimal action is selected by a discriminator based on the most desirable predicted tactile frames34,42.
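A minimal sketch of this action-selection loop follows; the forward model and discriminator below are toy stand-ins for the learned tactile forward model and slip discriminator, and all names are our assumptions.

```python
import numpy as np

# Hedged sketch of the model predictive control step described above:
# score a batch of candidate actions with a forward model and a
# discriminator, then keep the best-scoring candidate.
def select_action(candidates, forward_model, discriminator):
    """Return the candidate whose predicted tactile outcome scores
    lowest (here, lower means a more desirable prediction)."""
    scores = [discriminator(forward_model(a)) for a in candidates]
    return candidates[int(np.argmin(scores))]

# Toy example: the stand-in discriminator prefers small magnitudes.
best = select_action(
    candidates=[np.array([0.3]), np.array([0.1]), np.array([0.5])],
    forward_model=lambda a: a,                      # identity "prediction"
    discriminator=lambda f: float(np.abs(f).sum()))
```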

Our model aims to maximize \(p\left(\right.{\hat{{\bf{f}}}}_{c:H-1}| {{\bf{f}}}_{0:c-1},\)\(\{{{\bf{x}}}_{0:c-1},{{\bf{a}}}_{c-1:H-2}\},\) \(\left.{{\bf{z}}}_{0:c}\right)\) to predict the tactile frames \({\hat{{\bf{f}}}}_{c:H-1}\) by applying a stochastic assumption to the prediction model. Our objective is to sample from

$$p\left({\hat{{\bf{f}}}}_{c:H-1}| {{\bf{f}}}_{0:c-1},\{{{\bf{x}}}_{0:c-1},{{\bf{a}}}_{c-1:H-2}\}\right).$$

(4)

The network can then be trained with the frame prediction model by maximizing equation (4), which is equivalent to minimizing the loss in equation (5):

$$\begin{array}{ll}{{\mathcal{L}}}_{\theta }({\bf{f}}\;)&=\mathop{\sum }\nolimits_{i = c}^{H-1}-\left[\,\text{log}\,{p}_{\theta }\left({\hat{{\bf{f}}}}_{i:H-1}| {{\bf{f}}}_{0:i-1},\{{{\bf{x}}}_{0:c-1},{{\bf{a}}}_{c-1:i-1}\}\right)\right]\end{array}$$

(5)

For more information about action-conditioned prediction models, see refs. 35,43,44. The LSTM classifier takes the predicted tactile frames and classifies each predicted time step as either slip or non-slip. The resulting slip signal, where the binary value of 0 indicates no slip and 1 indicates slip, is then used in the model predictive control framework to adjust the robot’s movements accordingly.

Slip classification model

To classify tactile frames as slip and non-slip signals, we leverage the temporal processing capabilities of LSTM networks, which have been shown to significantly enhance the classification performance by incorporating historical tactile features, compared with traditional classification methods such as support vector machines45,46.

Formally, we define the slip classification task as mapping the predicted tactile state sequence \({\hat{{\bf{f}}}}_{c:H-1}\) to a sequence of slip classifications \({\hat{{\bf{s}}}}_{c:H-1}\), where each \({\hat{{\bf{s}}}}_{i}\in \{{c}_{{\text{slip}}},{c}_{{\text{non-slip}}}\}\) denotes the slip status at time step i within the prediction window. The LSTM model is trained to learn the temporal dependencies in the tactile data, enabling it to effectively differentiate between slip and non-slip conditions.

Given a sequence of predicted tactile states \({\hat{{\bf{f}}}}_{c:H-1}\), the LSTM-based slip classifier processes these states sequentially, with the LSTM unit updates defined as follows:

$${{\bf{h}}}_{i},{{\bf{c}}}_{i}=\,\text{LSTM}\,\left({\hat{{\bf{f}}}}_{i},{{\bf{h}}}_{i-1},{{\bf{c}}}_{i-1}\right),\qquad i=c:H-1,$$

(6)

where hi and ci are the hidden and cell states at time step i, respectively. The output of the LSTM at each time step is passed through a fully connected layer to produce the slip classification logits, which are then converted into probabilities using a sigmoid activation function:

$${{\bf{s}}}_{i}=\sigma ({{\bf{W}}}_{s}{{\bf{h}}}_{i}+{{\bf{b}}}_{s}),\qquad i=c:H-1,$$

(7)

where σ is the sigmoid function, and Ws and bs are the weight and bias of the output layer, respectively. The final slip classification \({\hat{{\bf{s}}}}_{i}\) is obtained by thresholding the probability output, assigning it to either the slip or non-slip class.
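Equation (7) and the thresholding step amount to a per-step linear map plus sigmoid over the LSTM hidden states. A minimal numpy sketch follows; Ws, bs and the hidden states are toy values, not the trained classifier's.

```python
import numpy as np

# Sketch of equation (7): each LSTM hidden state h_i is passed through
# a fully connected layer (W_s h_i + b_s) and a sigmoid, then
# thresholded into a hard slip (1) / non-slip (0) label.
def classify_steps(hidden_states, W_s, b_s, threshold=0.5):
    logits = hidden_states @ W_s + b_s           # one logit per time step
    probs = 1.0 / (1.0 + np.exp(-logits))        # sigmoid activation
    return (probs > threshold).astype(int), probs

H = np.array([[1.0, 0.0], [-1.0, 0.0], [2.0, 1.0]])  # 3 steps, hidden size 2
labels, probs = classify_steps(H, W_s=np.array([1.0, 1.0]), b_s=0.0)
```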

The architecture of the LSTM-based slip classification model is depicted in Fig. 2. Building on the work in ref. 34, which demonstrated that a simple LSTM-based tactile forward model combined with a slip classifier outperforms classifiers trained directly on future slip labels, we use a state-of-the-art tactile forward model35 to estimate tactile states over the future prediction horizon and classify these predicted states accordingly.

To determine the stability of the object in the robot’s grip, the slip classification model primarily utilizes the shear force components from the predicted tactile states. To enhance the model’s generalization ability, we incorporate two dropout layers with a dropout probability of 0.5.

The slip classification dataset is imbalanced, comprising 16% slip instances and 84% non-slip instances. To address this imbalance, we train the classification model using the binary cross-entropy loss function, with a weighting scheme that penalizes slip instances more heavily than non-slip cases, using a relative weight of 3:1. This approach helps ensure that the classifier is conservative, with a strong tendency to predict actual slip instances as positive, leading to high recall rates, as reflected in the metrics in Supplementary Table 1 (right) for most of the train and test objects.
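The 3:1 weighting can be sketched as a class-weighted binary cross-entropy. The plain numpy version below is for illustration only (training would use the framework's built-in loss); it shows that a missed slip costs three times as much as the symmetric miss on a non-slip sample.

```python
import numpy as np

# Sketch of the class-weighted binary cross-entropy described above,
# with slip (positive) errors weighted 3:1 over non-slip errors.
def weighted_bce(probs, targets, pos_weight=3.0):
    p = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    t = np.asarray(targets, dtype=float)
    per_sample = -(pos_weight * t * np.log(p) + (1 - t) * np.log(1 - p))
    return float(per_sample.mean())

miss_slip = weighted_bce([0.1], [1])      # slip predicted with prob 0.1
miss_nonslip = weighted_bce([0.9], [0])   # non-slip predicted with prob 0.1
```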

Slip control using trajectory modulation

Our proactive control approach, based on our previous work34, consists of a tactile forward model, a slip classifier and a predictive controller. A detailed presentation of the model architecture and hyperparameter settings of ACTP and the slip classification models is included in Supplementary Section 1. We have made several improvements to the original approach that have led to substantial performance gains. First, we have incorporated our ACTP model to improve the accuracy of the slip classifier (Supplementary Table 1 provides the results). Second, we have extended the control variable from one DOF to six DOFs. This control strategy regulates the distance to the reference trajectory (which we call residual values), rather than learning the Cartesian velocity as in ref. 47. This results in improved optimization and better convergence and generalization. Additionally, we have expanded the object and trajectory sets, further improving the generalization of the approach. We denote the Cartesian-space velocity vector of the robot as \(\overrightarrow{V}=({V}_{x},{V}_{y},{V}_{z},\,{W}_{x},{W}_{y},{W}_{z})\).

In the trajectory modulation loop, the first three components of the control vector represent the translational velocities along the Cartesian coordinate axes, whereas the last three components represent the angular velocities around those axes. Our goal is to minimize the future slip likelihood \(L({s}_{c,\ldots ,H-1}| \alpha ,\beta ,{\bf{x}},{\bf{a}})=\mathop{\prod }\nolimits_{i = c}^{H-1}f({s}_{i}| \alpha ,\beta ,{{\bf{x}}}_{0:i},{{\bf{a}}}_{i})\) by learning the optimal deviation from the reference velocity profile, where α, β, x and a represent the tactile forward model and slip classifier parameters, past tactile states and planned robot actions, respectively. si indicates the ith observed slip value within a prediction horizon of length c. We choose spherical coordinates for the robot’s input {ac−1, …, aH−2} to adapt a given reference trajectory. As such, we modify the reference velocity vector by separately adjusting its norm and direction. To facilitate this modification, we represent the translational and angular velocities in spherical coordinates as follows:

$$\begin{array}{l}{\rho }_{v}=\sqrt{{V}_{x}^{2}+{V}_{y}^{2}+{V}_{z}^{2}};\,{\theta }_{v}={\tan }^{-1}\frac{{V}_{y}}{{V}_{x}};\,{\phi }_{v}={\cos }^{-1}\left(\frac{{V}_{z}}{\sqrt{{V}_{x}^{2}+{V}_{y}^{2}+{V}_{z}^{2}}}\right)\end{array}$$

$$\begin{array}{l}{\rho }_{w}=\sqrt{{W}_{x}^{2}+{W}_{y}^{2}+{W}_{z}^{2}};\,{\theta }_{w}={\tan }^{-1}\frac{{W}_{y}}{{W}_{x}};\,{\phi }_{w}={\cos }^{-1}\left(\frac{{W}_{z}}{\sqrt{{W}_{x}^{2}+{W}_{y}^{2}+{W}_{z}^{2}}}\right)\end{array}$$

(8)

The variables ρ, θ and ϕ correspond to the norm, polar angle and azimuthal angle, respectively, in spherical coordinates. The relationship between the spherical and rectangular coordinates is illustrated in Extended Data Fig. 1. Using spherical coordinates enables the optimization process to separately learn the modifications needed for the norm and direction of the velocity vector in space. The optimization process returns the residual values, which are added to the components in equation (8) to modify the reference velocity profile.

$$\begin{array}{rcl}{\rho }_{v}^{* }&=&{\rho }_{v}+{\zeta }_{{\rho }_{v}};\,{\theta }_{v}^{* }={\theta }_{v}+{\zeta }_{{\theta }_{v}};\,{\phi }_{v}^{* }={\phi }_{v}+{\zeta }_{{\phi }_{v}}\\{\rho }_{w}^{* }&=&{\rho }_{w}+{\zeta }_{{\rho }_{w}};\,{\theta }_{w}^{* }={\theta }_{w}+{\zeta }_{{\theta }_{w}};\,{\phi }_{w}^{* }={\phi }_{w}+{\zeta }_{{\phi }_{w}}\end{array}$$

(9)

We express the residual values of equation (9) in matrix form as \({\zeta }^{{\rm{d}}}=({\zeta }_{{\rho }_{v}},{\zeta }_{{\theta }_{v}},{\zeta }_{{\phi }_{v}},{\zeta }_{{\rho }_{w}},{\zeta }_{{\theta }_{w}},{\zeta }_{{\phi }_{w}})\). The residual trajectory values are the difference between the executed robot trajectory ζe and the reference trajectory ζr, that is, \({\zeta }_{c:H-1}^{\;{\rm{d}}}={\zeta }_{c:H-1}^{\;{\rm{e}}}-{\zeta }_{c:H-1}^{\;{\rm{r}}}\) with \({\zeta }_{c:H-1}^{{\rm{e}}}=\{{{\bf{a}}}_{c-1}\ldots {{\bf{a}}}_{H-2}\}\), where \({\bf{a}}\in {{\mathbb{R}}}^{6}\) and \(\zeta \in {{\mathbb{R}}}^{6\times c}\).

Parameterized residual learning for trajectory adaptation

To optimize the robot’s trajectories over a future time horizon (that is, \({i}_{{\rm{p}}}\in \{c,\ldots H-1\}\in {{\mathbb{Z}}}^{c}\)), we represent the residual values using a parametric action representation48 that includes a weight parameter matrix w to be computed by the optimizer using Gaussian basis functions49, which is denoted by Φ: \({\zeta }_{c:H-1}^{\;{\rm{d}}}={{\bf{w}}}^{{\rm{T}}}\times \varPhi\), where \(\varPhi \in {{\mathbb{R}}}^{n\times c}\) and \({\bf{w}}\in {{\mathbb{R}}}^{c\times 6}\). For the sake of simplicity, we express ζc:H−1 by ζ.
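The Gaussian-basis parameterization can be sketched as follows. Note that the shapes stated in the text (w ∈ ℝc×6 with ζ = wT × Φ and Φ ∈ ℝn×c) do not multiply directly; the shapes below (w ∈ ℝn×6, residuals = ΦTw ∈ ℝc×6) are our assumption about the intended products, and all names are illustrative.

```python
import numpy as np

# Sketch of the parametric residual representation: n Gaussian basis
# functions evaluated at the c horizon steps form Phi, and a weight
# matrix found by the optimizer maps them to the 6-DOF residuals.
def gaussian_basis(c, n, width=0.1):
    t = np.linspace(0.0, 1.0, c)          # normalized horizon time
    centers = np.linspace(0.0, 1.0, n)    # basis-function centres
    return np.exp(-((centers[:, None] - t[None, :]) ** 2) / (2 * width**2))

c, n = 10, 4
Phi = gaussian_basis(c, n)                # shape (n, c)
w = np.zeros((n, 6))                      # the optimizer's search variables
residuals = Phi.T @ w                     # (c, 6) residual trajectory
```

Optimizing over the n × 6 entries of w, rather than all c × 6 residual values directly, is what reduces the search dimension over the horizon.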

The parameterized action space representation reduces the computational complexity compared with a direct search in the continuous action space48. The benefit is greater when the action is optimized over a future time horizon rather than at a single time step, owing to the smaller number of search parameters in the optimization problem. We close the loop for the slip prevention controller by solving the constrained optimization in equation (10).

$$\begin{array}{r}\mathop{{\rm{arg}}\,{\rm{min}}}\limits_{{\bf{w}}}\,{\parallel {\zeta }^{{\rm{d}}}({\bf{w}},\varPhi )\parallel }_{2}\\ \,\text{Subject to}\,\,\,{{\mathbb{E}}}_{{{\bf{f}}}_{c:H-1}| ({{\bf{f}}}_{0:c-1},{\zeta }_{c:H-1}^{\;{\rm{e}}},{\zeta }_{0:c}^{\;{\rm{e}}})}[{s}_{c:H-1}({\zeta }^{{\rm{e}}}({\bf{w}},\varPhi ))]\,=\,0 \\ \,\,lb < {\dot{X}}_{i+1}-{\dot{X}}_{i}^{{\rm{obs}}} < ub\qquad \end{array},$$

(10)

where ζe = ζd(w, Φ) + ζr, and f denotes the tactile state vector \(\in {{\mathbb{R}}}^{48\times c}\). The first nonlinear constraint in the optimization formulation is based on the expected value of the slip signal over the prediction horizon c, given by \({\mathbb{E}}[{s}_{c:H-1}({\zeta }^{{\rm{e}}}({\bf{w}},\varPhi ))]\), where s is a function of the robot’s future trajectory. The second constraint limits the difference between the generated robot velocities and the observed velocity (measurement) to ensure compliance with the maximum acceleration limit of the robot’s low-level controller. The optimization problem seeks to minimize the expected slip over the prediction horizon while remaining close to the provided reference trajectory.
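The structure of equation (10) can be illustrated with a deliberately tiny one-dimensional toy: minimize the residual magnitude subject to a zero-expected-slip constraint and a velocity-change bound. The `expected_slip` oracle below stands in for the tactile forward model plus slip classifier, and all names and values are ours.

```python
# Toy sketch of the constrained optimization in equation (10): among
# candidate scalar residuals, choose the smallest-magnitude one whose
# predicted slip expectation is zero and whose velocity change stays
# within (lb, ub), mimicking the acceleration constraint.
def solve_residual(candidates, expected_slip, v_obs, lb=-0.5, ub=0.5):
    feasible = [w for w in candidates
                if expected_slip(w) == 0 and lb < w - v_obs < ub]
    return min(feasible, key=abs) if feasible else None

# Stand-in oracle: slowing down by at least 0.2 prevents slip.
w_star = solve_residual(
    candidates=[-0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4],
    expected_slip=lambda w: 0 if w <= -0.2 else 1,
    v_obs=0.0)
```

The real controller searches over the basis-function weights w with at most ten optimizer iterations, as noted later in this section, rather than enumerating candidates.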

The resulting spherical velocity components are transformed back to rectangular coordinates before being sent to the robot’s Cartesian-velocity controller, as per equation (11). The Vx, Vy, Vz, Wx, Wy and Wz values are used to update the robot’s Cartesian-velocity controller, which regulates the robot’s motion along the desired trajectory while avoiding slip.

$${V}_{x}={\rho }_{v}\,\sin ({\phi }_{v})\,\cos ({\theta }_{v}),\,{V}_{y}={\rho }_{v}\,\sin ({\phi }_{v})\,\sin ({\theta }_{v}),\,{V}_{z}={\rho }_{v}\,\cos ({\phi }_{v}),$$

$${W}_{x}={\rho }_{w}\,\sin ({\phi }_{w})\,\cos ({\theta }_{w}),\,{W}_{y}={\rho }_{w}\,\sin ({\phi }_{w})\,\sin ({\theta }_{w}),\,{W}_{z}={\rho }_{w}\,\cos ({\phi }_{w}).$$

(11)
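As a concrete sketch, the forward conversion in equation (8) and the inverse in equation (11) can be round-tripped for the translational block (the angular block W is handled identically). We use arctan2 rather than tan−1(Vy/Vx) to resolve the quadrant, which is an implementation choice not stated in the text.

```python
import numpy as np

# Sketch of equation (8) (Cartesian velocity -> spherical) and its
# inverse, equation (11) (spherical -> Cartesian), for V = (Vx, Vy, Vz).
def to_spherical(V):
    Vx, Vy, Vz = V
    rho = np.sqrt(Vx**2 + Vy**2 + Vz**2)    # norm (speed)
    theta = np.arctan2(Vy, Vx)              # polar angle
    phi = np.arccos(Vz / rho)               # azimuthal angle
    return rho, theta, phi

def to_cartesian(rho, theta, phi):
    return (rho * np.sin(phi) * np.cos(theta),
            rho * np.sin(phi) * np.sin(theta),
            rho * np.cos(phi))

rho, theta, phi = to_spherical((0.1, 0.2, 0.2))
V_back = to_cartesian(rho, theta, phi)       # round-trips to the input
```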

We provide a detailed presentation of the grip force control method as the baseline controller for benchmarking our proactive controller in the Supplementary Information.

Training and testing

We trained our ACTP model and slip classification model offline using a dataset of pick-and-move tasks. Figure 2 illustrates the forward model and its architecture within a predictive control pipeline50.

The manipulation dataset consists of 420,000 data samples collected from 600 manipulation trials involving 13 box-shaped objects. Each object (including both train and test set objects; Supplementary Tables 1 and 2 list the dataset details) was involved in an equal number of trials, with three objects reserved as test objects that were not seen during training. The performance of the ACTP and slip classification models was validated by analysing the mean absolute error and F-score values, respectively, for the unseen objects. All models were trained on an Ubuntu machine equipped with an AMD Ryzen Threadripper CPU, an NVIDIA GeForce RTX 2080 GPU and 64 GB of memory. The training was conducted using the PyTorch v. 1.13.1 library with CUDA v. 11.7. The trained ACTP and slip classification models were then used with fixed weight parameters for the real-time control tests.

During testing, the trajectory modulation module utilizes the inference from these two models in an online optimization loop to determine the optimal next robot action that minimizes the likelihood of slip occurrence in a receding horizon framework. We will now present the design details for each building block of the proactive control system (Fig. 2).

Objects, metrics and comparison method

Table 2 presents the objects used for training and testing our controller (Supplementary Table 1 provides pictures of each object). It also details the performance metrics for both train and test objects. The train and test sets are based on the data collection for training the underlying tactile forward model and slip classifier in the proactive control, consisting of ten train objects and three test objects. The performance metrics for the slip controller are the number of time steps with rotation of >6° (RTS) and the maximum object rotation (MOR) in degrees. RTS represents the number of time steps during which the object’s rotation exceeded the slip classification threshold (6°); smaller RTS values indicate better slip avoidance performance. The proactive control achieved excellent performance with no slip instances (RTS = 0) for five objects in the train set and two objects in the test set (boldface values in the RTS column). The MOR values show that for five objects, the maximum rotation slightly exceeded 6°, the threshold set for slip classification. This may be attributed to the imprecision of the forward model, the classifier (Supplementary Table 1) or the threshold we set on the number of iterations in the controller computations to find the optimal actions (we allow only ten optimizer iterations due to real-time constraints). Nonetheless, MOR remains below 9°, preventing failure despite slip instances. Comparing the mean values of MOR and RTS for the train and test sets demonstrates that our proposed controller generalizes well to objects unseen during training and effectively avoids slip instances on average. DRT denotes the distance to the reference trajectory, calculated by summing the Euclidean distance between the reference and optimized trajectories across all task time steps (measured in m s−1). ROV represents the resulting optimality value, indicating the convergence of the optimization process at the final iteration.
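The two slip metrics can be computed directly from a trial's rotation trace, as in the sketch below; the function and variable names are ours.

```python
import numpy as np

# Sketch of the two slip metrics defined above: RTS counts time steps
# where the object's rotation exceeds the 6-degree threshold, and MOR
# is the maximum rotation observed over the trial.
def slip_metrics(rotation_deg, threshold_deg=6.0):
    rot = np.abs(np.asarray(rotation_deg, dtype=float))
    rts = int((rot > threshold_deg).sum())   # time steps above threshold
    mor = float(rot.max())                   # maximum object rotation
    return rts, mor

rts, mor = slip_metrics([0.0, 2.0, 6.5, 8.1, 5.0])
```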

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.





Databricks at a crossroads: Can its AI strategy prevail without Naveen Rao?



“Databricks is in a tricky spot with Naveen Rao stepping back. He was not just a figurehead, but deeply involved in shaping their AI vision, particularly after MosaicML,” said Robert Kramer, principal analyst at Moor Insights & Strategy.

“Rao’s absence may slow the pace of new innovation slightly, at least until leadership stabilizes. Internal teams can keep projects on track, but vision-driven leaps, like identifying the ‘next MosaicML’, may be harder without someone like Rao at the helm,” Kramer added.

Rao became a part of Databricks in 2023 after the data lakehouse provider acquired MosaicML, a company Rao co-founded, for $1.3 billion. During his tenure, Rao was instrumental in leading research for many Databricks products, including Dolly, DBRX, and Agent Bricks.






NFL player props, odds: Week 2, 2025 NFL picks, SportsLine Machine Learning Model AI predictions, SGP



The Under went 12-4 in Week 1, indicating that not only were there fewer points scored than expected, but there were also fewer yards gained. Backing the Under with NFL prop bets was likely profitable for the opening slate of games, but will that hold for Week 2 NFL props? Interestingly, though, four of the five highest-scoring games last week were the primetime games, so if that holds, the Overs for this week's night games could be attractive with Week 2 NFL player props.

There’s a Monday Night Football doubleheader featuring star pass catchers like Nico Collins, Mike Evans and Brock Bowers. The games also feature promising rookies such as Ashton Jeanty, Omarion Hampton and Emeka Egbuka. Prop lines are usually all over the place early in the season as sportsbooks attempt to establish a player’s potential, and you could take advantage of this with the right NFL picks. If you are looking for NFL prop bets or NFL parlays for Week 2, SportsLine has you covered with the top Week 2 player props from its Machine Learning Model AI.

Built using cutting-edge artificial intelligence and machine learning techniques by SportsLine’s Data Science team, AI Predictions and AI Ratings are generated for each player prop. 

Now, with the Week 2 NFL schedule quickly approaching, SportsLine’s Machine Learning Model AI has identified the top NFL props from the biggest Week 2 games.

Week 2 NFL props for Sunday’s main slate

After analyzing the NFL props from Sunday’s main slate and examining the dozens of NFL player prop markets, the SportsLine’s Machine Learning Model AI says Lions receiver Amon-Ra St. Brown goes Over 63.5 receiving yards (-114) versus the Bears at 1 p.m. ET. Detroit will host this contest, which is notable as St. Brown has averaged 114 receiving yards over his last six home games. He had at least 70 receiving yards in both matchups versus the Bears a year ago.

Chicago allowed 12 receivers to go Over 63.5 receiving yards last season, as the Bears' pass defense is adept at keeping opponents out of the end zone but not as good at preventing yardage. Chicago allowed the highest yards per attempt and second-highest yards per completion in 2024. While St. Brown had just 45 yards in the opener, the last time he was held under 50 receiving yards, he had 193 yards the following week. The SportsLine Machine Learning Model projects 82.5 yards for St. Brown in a 4.5-star pick. See more Week 2 NFL props here.

Week 2 NFL props for Vikings vs. Falcons on Sunday Night Football

After analyzing Falcons vs. Vikings props and examining the dozens of NFL player prop markets, the SportsLine’s Machine Learning Model AI says Falcons running back Bijan Robinson goes Over 65.5 rushing yards (-114). Robinson ran for 92 yards and a touchdown in Week 14 of last season versus Minnesota, despite the Vikings having the league’s No. 2 run defense a year ago. The SportsLine Machine Learning Model projects Robinson to have 81.8 yards on average in a 4.5-star prop pick. See more NFL props for Vikings vs. Falcons here


Week 2 NFL props for Buccaneers vs. Texans on Monday Night Football

After analyzing Texans vs. Buccaneers props and examining the dozens of NFL player prop markets, the SportsLine’s Machine Learning Model AI says Bucs quarterback Baker Mayfield goes Under 235.5 passing yards (-114). While Houston has questions regarding its offense, there’s little worry about the team’s pass defense. In 2024, Houston had the second-most interceptions, the fourth-most sacks and allowed the fourth-worst passer rating. Since the start of last year, and including the playoffs, the Texans have held opposing QBs under 235.5 yards in 13 of 20 games. The SportsLine Machine Learning Model forecasts Mayfield to finish with just 200.1 passing yards, making the Under a 4-star NFL prop. See more NFL props for Buccaneers vs. Texans here


Week 2 NFL props for Chargers vs. Raiders on Monday Night Football

After analyzing Raiders vs. Chargers props and examining the dozens of NFL player prop markets, the SportsLine’s Machine Learning Model AI says Chargers quarterback Justin Herbert goes Under 254.5 passing yards (-114). The Raiders’ defense was underrated in preventing big passing plays a year ago as it ranked third in the NFL in average depth of target allowed. It forced QBs to dink and dunk their way down the field, which doesn’t lead to big passing yardages, and L.A. generally prefers to not throw the ball anyway. Just four teams attempted fewer passes last season than the Chargers, and with L.A. running for 156.5 yards versus Vegas last season, Herbert shouldn’t be overly active on Monday night. He’s forecasted to have 221.1 passing yards in a 4.5-star NFL prop bet. See more NFL props for Chargers vs. Raiders here

How to make Week 2 NFL prop picks

SportsLine’s Machine Learning Model has identified another star who sails past his total and has dozens of NFL props rated 4 stars or better. You need to see the Machine Learning Model analysis before making any Week 2 NFL prop bets.

Which NFL prop picks should you target for Week 2, and which quarterback has multiple 5-star rated picks? Visit SportsLine to see the latest NFL player props from SportsLine’s Machine Learning Model that uses cutting-edge artificial intelligence to make its projections.








In the News: Thomas Feeney on AI in Higher Education – Newsroom



“I had an interesting experience over the summer teaching an AI ethics class. You know plagiarism would be an interesting question in an AI ethics class … They had permission to use AI for the first written assignment. And it was clear that many of them had just fed in the prompt, gotten back the paper and uploaded that. But rather than initiate a sort of disciplinary, oppositional setting, I tried to show them, look, what you’ve produced is kind of generic … and this gave the students a chance to recognize that they weren’t there in their own work. This opened the floodgates,” Feeney said.

“I think the focus should be less on learning how to work with the interfaces we have right now and more on just graduate with a story about how you did something with AI that you couldn’t have done without it. And then, crucially, how you shared it with someone else,” he continued.




