
AI Insights

LifeGPT: topology-agnostic generative pretrained transformer model for cellular automata


Codes, data, and additional animations/figures are available at https://github.com/lamm-mit/LifeGPT.

Model architecture and hardware information

LifeGPT was constructed in Python using the “x-transformers” library65. The models in this study were trained with a workstation equipped with a high-end CUDA-compatible GPU (RTX A4000, NVidia, Santa Clara, CA, USA) for a total of 50 epochs on a 10,000-sample training set.

Hyperparameters

Hyperparameters were initially selected heuristically for optimal performance, as the GPU primarily used for training (RTX A4000, NVidia, Santa Clara, CA, USA) had 16 GB of VRAM. Unless otherwise stated, all instances of LifeGPT used the following set of hyperparameters during training, as described in Table 1. The batch size was initially set to 20 samples and was decreased to 5 samples for later versions of LifeGPT due to memory limitations encountered when using FCM (see “Forgetful causal masking (FCM) implementation”).

Table 1 LifeGPT’s best-performing hyperparameters

Datasets

Data generation overview

To generate training sets, validation sets, and testing sets, the same basic strategy was used. First, IC game-states were generated stochastically as 2D, 32 × 32 NumPy arrays. Depending on the exact algorithm used, the generated IC game-states would collectively form either high-entropy or broad-entropy datasets. Next, a custom Life Python class was used to generate the corresponding NGS for every previously generated IC. Lastly, each IC and its corresponding NGS were concatenated within a string. Every generated pair was subsequently stored within a dataframe for future retrieval.

Data topology

Transformer models are architected to process data as 1D arrays. Therefore, to teach LifeGPT the rules of a 2D CA algorithm, such as Life, the 2D data from each time slice of the game had to be flattened into a 1D array. In this way, LifeGPT functioned similarly to a vision transformer, in which 2D data is flattened into a 1D array whose entries are tokenizable image patches26. However, due to the low resolution of the 32 × 32 toroidal grid on which Life was simulated to generate our training data, we were able to encode every pixel of each time-slice of the game in a 1D array (as opposed to grouping pixels into patches).

Instruction Tuning

In order to encode the time-progression of the game into the training set, the initial-state and next-state 1D arrays were placed within a prompt string, which was subsequently tokenized to form a vector. Specifically, both 1D arrays were converted to strings and placed within a larger string containing start and end tokens (@ and $, respectively), a task statement, and bracket delimiters (e.g., “@PredictNextState [NEXT_STATE]$”).
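The serialization step above may be sketched as follows. This is a minimal illustration, not the paper's code: the exact placement of the initial state within the prompt, the spacing, and the function name are our assumptions; only the @/$ tokens, the task statement, and the bracket delimiters come from the text.

```python
def make_prompt(ic_flat, ngs_flat):
    """Serialize a flattened IC/NGS pair into one training string.

    ic_flat, ngs_flat: lists of 0/1 ints (length 1024 for a 32x32 grid).
    The '@' start token, '$' end token, task statement, and bracket
    delimiters follow the paper; where the IC sits in the string is an
    assumption for illustration.
    """
    ic_str = "".join(str(c) for c in ic_flat)
    ngs_str = "".join(str(c) for c in ngs_flat)
    return f"@PredictNextState {ic_str} [{ngs_str}]$"
```

At inference time, everything up to and including the opening bracket would serve as the prompt, with the model generating the bracketed next-state tokens.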

Tokenization

We employed a byte-level tokenizer that operates on UTF-8 encoded text. UTF-8 is a variable-width character encoding capable of representing every character in the Unicode standard, which allows the tokenizer to process a wide range of scripts, symbols, and special characters uniformly. By converting the text into its byte-level representation, our approach ensures consistent tokenization across different languages and handles out-of-vocabulary words and non-standard text, such as emojis or code, effectively. This method allows for robust and flexible processing of diverse textual data. Tokenization resulted in a vector suitable as input to the embedding layer of the transformer model.
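A byte-level tokenizer of the kind described above is straightforward to sketch: each UTF-8 byte becomes one token ID in the range 0–255, so the vocabulary is fixed and no text is ever out-of-vocabulary. The function names below are ours, not the paper's.

```python
def tokenize(text):
    """Byte-level tokenization: map text to its UTF-8 byte values (0-255)."""
    return list(text.encode("utf-8"))

def detokenize(token_ids):
    """Inverse mapping: reassemble the bytes and decode them as UTF-8."""
    return bytes(token_ids).decode("utf-8")
```

Because every possible byte is a valid token, the round trip is lossless for any text, including multi-byte characters such as emojis.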

Training set generation

High-entropy IC set generation

High-entropy IC game-states were generated by effectively flipping a coin 1024 times to designate the states (0 or 1) on a 32 × 32 grid. When considering the configuration space of a binary 2D array M ∈ {0, 1}^(32×32), the following formula may be used to describe its Shannon entropy66 (informational entropy):

$$H(M)=-\sum _{x\in \{0,1\}}{p}_{x}{\log }_{2}{p}_{x}$$

(1)

(This is also known as the binary entropy function67.) Here, px is the probability of finding the value x in the 32 × 32 array M, defined as:

$${p}_{x}=\frac{1}{{32}^{2}}\mathop{\sum }\limits_{i=1}^{32}\mathop{\sum }\limits_{j=1}^{32}{\delta }_{{M}_{ij},x}$$

(2)

where Mij is the element of M in the ith row and jth column, and \({\delta }_{{M}_{ij},x}\) is the Kronecker delta function, which is equal to 1 if Mij = x and 0 otherwise.

Thus, for a “50–50 coin toss” scenario (\({p}_{0}={p}_{1}=\frac{1}{2}\)), H(M) is at a maximum and is equal to 1 Sh. Moreover, since binary data necessitates the condition p0 + p1 = 1, only one probability value is needed to fully describe the entropy of a given array M. We therefore denote the ordering of a given IC by a single order parameter, η, where η = p1. When considering the order parameter of a set of ICs, it is important to note that, because IC generation is always a stochastic process, the exact η of any given IC in the set cannot be predicted with certainty. For this reason, we characterize IC sets with the symbol 〈η〉, denoting the expected order parameter.
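The quantities in Eqs. (1) and (2) reduce to a few lines of Python. This is a minimal illustration of the definitions, not the paper's code; the function names are ours.

```python
import math

def order_parameter(grid):
    """eta = p1: the fraction of 1s in a binary grid (list of lists)."""
    flat = [c for row in grid for c in row]
    return sum(flat) / len(flat)

def shannon_entropy(grid):
    """Binary Shannon entropy H(M) in shannons, per Eq. (1).

    Terms with p = 0 are skipped, since lim p->0 of p*log2(p) is 0.
    """
    p1 = order_parameter(grid)
    p0 = 1.0 - p1
    return -sum(p * math.log2(p) for p in (p0, p1) if p > 0)
```

As noted above, a grid with p0 = p1 = 1/2 yields the maximum H(M) = 1 Sh, while a uniform grid (all 0s or all 1s) yields H(M) = 0.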

To generate high-entropy ICs, a binary array was constructed by evaluating random.random() < 0.5 (using the “random” module in Python—see https://python.readthedocs.io/en/latest/library/random.html) for each element. If the comparison returned True, the element was set to 1; otherwise, 0. This method resulted in a training set whose experimentally measured η values followed a binomial distribution (Fig. 5A).
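The high-entropy generation procedure may be sketched as below (a minimal re-creation of the described coin-flip scheme, not the paper's script; the function name and the optional seed parameter are ours).

```python
import random

def generate_high_entropy_ic(width=32, seed=None):
    """Generate a width x width binary IC by a fair coin flip per cell,
    mirroring the random.random() < 0.5 check described in the text."""
    rng = random.Random(seed)  # seeded Random instance for reproducibility
    return [[1 if rng.random() < 0.5 else 0 for _ in range(width)]
            for _ in range(width)]
```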

Broad-entropy IC set generation

To create a broad-entropy IC set, first, a vector was created representing a set of order parameters ranging from 0 to 1. The length of this vector was set to the desired number of samples in the dataset (10,000 for training, 1000 for validation). This set of order parameters may be thought of as containing different expected probabilities for finding a 1 in an IC.

Then, the same procedure as for the high-entropy IC set was followed, with two exceptions: (1) instead of random.random() < 0.5, the comparison random.random() < η determined the value of each element in each IC array, and (2) each IC was generated using a unique η from the aforementioned vector (see “Training set generation”). This strategy ensured that the IC set represented a broad range of ordering, from all 0s, to an even mix of 0s and 1s, to all 1s (Fig. 5B).
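Combining the two steps above, a broad-entropy set may be sketched as follows (a minimal illustration under the stated scheme; names are ours, and the linear spacing of η values is taken from the description above).

```python
import random

def generate_broad_entropy_set(n_samples, width=32, seed=None):
    """Generate ICs whose order parameters sweep linearly from 0 to 1.

    Each IC uses its own eta: cell = 1 if random.random() < eta, else 0,
    so the first IC is all 0s and the last is all 1s.
    """
    rng = random.Random(seed)
    etas = [i / (n_samples - 1) for i in range(n_samples)]
    return [[[1 if rng.random() < eta else 0 for _ in range(width)]
             for _ in range(width)]
            for eta in etas]
```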

Next-game-state generation

NGSs were calculated from IC arrays by applying Life rules assuming a toroidal grid (see the update_grid() function here: game.py).
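A minimal stand-in for the update_grid() function referenced above is shown below, assuming the standard Life rules (birth on exactly 3 neighbours, survival on 2 or 3) with modular indexing for the toroidal wrap-around; it is a sketch, not the repository's code.

```python
def update_grid(grid):
    """One Life step on a toroidal grid.

    grid: list of lists of 0/1. Neighbours wrap around the edges
    via modular (periodic) indexing.
    """
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Count the 8 neighbours with toroidal wrapping.
            n = sum(grid[(i + di) % h][(j + dj) % w]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            # Birth with exactly 3 neighbours; survival with 2 or 3.
            nxt[i][j] = 1 if (n == 3 or (grid[i][j] == 1 and n == 2)) else 0
    return nxt
```

A quick sanity check is the period-2 “blinker”: a vertical line of three live cells becomes a horizontal line after one step.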

Reshaping data

To make the handling of training set data easier, the final stage of the training set generator involves reshaping the data into a list of sub-lists, in which each entry in the list contains a sub-list corresponding to a specific IC. Within each unique sub-list, two strings are stored, one corresponding to a flattened IC, and one corresponding to a flattened NGS (see the generate_sets() function here: game.py).

Validation set generation

Validation sets were generated using the same methods as in “Training set generation,” since the random.random() function ensures sufficiently random IC generation, keeping training and validation sets entirely independent. Combined with the enormous space of possible 32 × 32 binary arrays (2^(32×32) = 2^1024 ≈ 1.80 × 10^308 unique possibilities), this made the likelihood of even a single sample being identical between a 10,000-sample training set and a 1000-sample validation set negligible (see “Learning abilities”). This, in turn, ensured that over the course of model training, training loss and validation loss remained independent of one another.

Testing set generation

A 10-sample testing set was constructed to validate the performance of models during and after training, in a manner other than by inspecting the validation and training losses. Five samples in the testing set were generated stochastically in the same manner as in “Training set generation,” and 5 samples were manually defined to match known periodic and complex patterns found in Life (Fig. 3). NGSs were recursively generated for a total of 10 states (including the IC) per sample, for all 10 samples in the testing set.

Dataset generation for differently sized grids

For the datasets (training, validation, testing) for LifeGPT-MultiGrid (see “Learning life on differently sized grids”), the only differences in the procedure were to specify different grid sizes (WG ∈ {2, 4, 8, 16}) during IC generation, and to introduce a padding character (“p”), appended as many times as needed to the end of each sub-list whose grid size was smaller than the largest specified grid size, such that all sub-lists were the same length.
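The padding step may be sketched as follows. We assume here that the padding is applied to the serialized IC and NGS strings inside each sub-list; the function name is ours.

```python
def pad_samples(samples, pad_char="p"):
    """Pad each [IC_string, NGS_string] pair with the padding character
    so every sample matches the length of the longest sample.

    samples: list of [ic_str, ngs_str] pairs from differently sized grids.
    """
    max_len = max(len(ic) for ic, _ in samples)
    return [[ic + pad_char * (max_len - len(ic)),
             ngs + pad_char * (max_len - len(ngs))]
            for ic, ngs in samples]
```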

Forgetful causal masking (FCM) implementation

FCM was implemented using the “x-transformers” library65. FCM was built into this library as part of the AutoregressiveWrapper class by default. FCM was enabled by setting mask_prob to 0.15, which was empirically shown to be effective by Liu et al.68.

FCM involves randomly masking a predetermined percentage of past tokens during the learning process, in addition to standard causal attention masking. The authors68 argue that this method prevents over-attending to more recent tokens in a given sequence, encouraging attention to tokens in the “distant past.” Implementing FCM in our model increased the rate at which model accuracy improved with each epoch. Furthermore, FCM enabled our model to achieve 100% accuracy on our testing set with a sampling temperature of 1.0 in fewer than 50 epochs, which was previously unattainable when training with a broad-entropy dataset.
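Schematically, FCM composes a random past-token mask with the usual causal mask. The sketch below uses plain Python booleans to show the idea only; the actual implementation lives inside the x-transformers AutoregressiveWrapper, and details such as how the random mask is sampled per batch are simplified here.

```python
import random

def fcm_mask(seq_len, mask_prob=0.15, seed=None):
    """Boolean attention mask combining causal and forgetful causal masking.

    mask[i][j] is True if position i may attend to position j:
    causal masking forbids j > i, and FCM additionally hides each past
    position j < i with probability mask_prob. A token can always
    attend to itself.
    """
    rng = random.Random(seed)
    dropped = [rng.random() < mask_prob for _ in range(seq_len)]
    return [[(j <= i) and not (dropped[j] and j < i) for j in range(seq_len)]
            for i in range(seq_len)]
```

With mask_prob = 0 this reduces to a standard causal mask; with mask_prob = 1 each token can attend only to itself.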

Implementing FCM increased the GPU RAM requirements of our LifeGPT, necessitating a decrease in batch size from 20 to 5 samples.

Model development

Training was initially conducted with high-entropy data. Due to the (pseudo)random nature of our training set generation script (see “Training set generation”) and the high number of samples in the training set (10,000), there was some diversity of training data entropy despite use of a static order parameter (η = 0.5) (Fig. 5A). Nevertheless, observed model accuracy issues for low-entropy ICs prompted the use of broad-entropy datasets (Fig. 5B), which resulted in improved performance. Later, LifeGPT-MultiGrid (see “Learning life on differently sized grids”) was developed using a modified dataset to show that the LifeGPT framework allowed for simultaneous learning of multiple grid sizes.

Accuracy benchmarking

The testing dataset consisted of 10 flattened 32 × 32 binary arrays, representing initial states in Life, and their resulting iterations in accordance with Life state-transition rules on a toroidal (periodic) grid, numbering one through ten. Depending on the type of model being trained (the number of desired time-step jump predictions), different columns in the testing dataset would be selected as the ground truth. Accuracy at each checkpoint (every 2 epochs, starting with epoch 2) was determined by inputting the task statement (e.g., @PredictNextState) into a tokenizer, and subsequently using the tokenized data as the prompt for the autoregressive model. Since all of LifeGPT’s training was conducted on data corresponding to a 32 × 32 grid, LifeGPT was programmed to output the exact number of tokens necessary to fully describe the NGS. After LifeGPT was finished generating the output data, this data was compared to the ground truth (the flattened NGS in accordance with Life’s rules), and an accuracy score was computed using the following function:

$$A=\frac{1}{N}\mathop{\sum }\limits_{i=1}^{N}{\delta }_{{y}_{i}{\hat{y}}_{i}}$$

(3)

where A is the accuracy of the model, N is the total number of cell predictions across the testing dataset (N = 32 × 32 × 10 = 10,240 cells for a dataset with ten pairs of 32 × 32 grid game examples), yi is the ground truth value, \({\hat{y}}_{i}\) is the predicted value, and δ is the Kronecker delta function, which equals 1 if \({y}_{i}={\hat{y}}_{i}\) and 0 otherwise. An accuracy score was computed once every 2 epochs, starting with epoch 2, for each model sampling temperature in {0, 0.25, 0.5, 0.75, 1}.
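Equation (3) amounts to the fraction of matching cells between the flattened prediction and ground truth, which may be computed as below (a direct transcription of the formula; the function name is ours).

```python
def accuracy(ground_truth, predicted):
    """Accuracy A per Eq. (3): the mean Kronecker delta over all cells.

    ground_truth, predicted: equal-length flat sequences of 0/1 cell values.
    """
    assert len(ground_truth) == len(predicted)
    matches = sum(1 for y, y_hat in zip(ground_truth, predicted) if y == y_hat)
    return matches / len(ground_truth)
```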

Training set entropy effects experimental procedure

The goal of this experiment was to determine what effect, if any, the ordering of the ICs making up the training data for LifeGPT would have on accuracy (A) when the model was fed ICs generated with varying expected order parameters (〈η〉IC). We used two versions of LifeGPT: one trained on high-entropy training data, and the other on broad-entropy training data. Next, a broad-entropy testing set (comprising 110 samples, each with an 〈η〉IC value ranging linearly from 0 to 1) was generated in the same manner as the broad-entropy training set. The stochasticity of the IC generation process ensured that both broad-entropy sets remained independent. Both models were then benchmarked on each sample in a manner similar to the method in “Accuracy benchmarking and sampling temperature effects,” the only difference being that A was calculated for each sample in the testing set, as opposed to an average over all samples. Finally, A versus 〈η〉IC was plotted for both models (see Fig. 4).

Autoregressive loop implementation

The autoregressive loop is simply an implementation of LifeGPT in which the model is placed inside a loop: a portion of its output, corresponding to the NGS, is converted into an input tensor and fed back into LifeGPT for a desired number of iterations. As such, the NGS output of one loop iteration serves as the IC for the next. In this way, the autoregressive loop is able to “run” Life in a recursive manner similar to the original algorithm. We ran the autoregressive loop using two versions of LifeGPT trained on the broad-entropy training set: one that stopped training at epoch 16 (chosen because this version was the earliest instance of A = 1.0 for sampling temperature = 1), and one that continued training until epoch 50, across sampling temperatures 0, 0.25, 0.5, 0.75, and 1. We compared the NGSs output by our autoregressive loop method with the ground truth NGSs generated with the Life algorithm, and created animations for all model-sampling temperature combinations, showing the progression of the ground truth Life system, the autoregressive loop-generated NGSs, and the discrepancy between the two.
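The control flow of the loop may be sketched as below. The model call is abstracted as a callable: in the actual pipeline it would wrap prompt assembly, LifeGPT generation, and extraction of the NGS tokens, none of which are reproduced here.

```python
def autoregressive_loop(ic, predict_next_state, n_iterations):
    """Run a state-transition model recursively: each predicted NGS
    becomes the IC for the next iteration.

    ic: flattened initial game state.
    predict_next_state: stand-in for LifeGPT inference; any callable
        mapping one flattened game state to the next can be plugged in.
    Returns the list of all n_iterations + 1 states, including the IC.
    """
    states = [ic]
    for _ in range(n_iterations):
        states.append(predict_next_state(states[-1]))
    return states
```

Substituting the ground truth Life update for predict_next_state reproduces the reference trajectory against which the model's rollout is compared.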

We also ran the autoregressive loop (and the Life algorithm) for 249 iterations (resulting in 250 game states, including the ICs), using only the epoch 50, sampling temperature = 0 version of LifeGPT due to time and compute constraints, for all 10 samples in the testing set. For each game state, we compared LifeGPT’s predictions to the GT Life algorithm’s output using the metric “Error Rate,” defined as:

$${\rm{Error}}\,{\rm{Rate}}=1-\frac{1}{G}\mathop{\sum }\limits_{i=1}^{G}{\delta }_{{y}_{i}{\hat{y}}_{i}}$$

(4)

where Error Rate is the fraction of cells the model predicted incorrectly, G is the total number of cells comprising each game state (G = 32 × 32 = 1024 cells), yi is the ground truth value, \({\hat{y}}_{i}\) is the predicted value, and δ is the Kronecker delta function.

LifeGPT-multigrid experimental procedure

Accuracy characterization was performed in the same manner as described in “Accuracy benchmarking and sampling temperature effects,” aside from the use of a different testing dataset. A testing set of 100 samples (25 samples per WG, for WG ∈ {2, 4, 8, 16}) was created utilizing broad-entropy IC generation. Inference was performed for each sample, and average accuracies were calculated for each 25-sample group in accordance with equation (3).

Use of generative AI

Some Python scripts used for data generation, model training, data processing, and figure generation were written with the assistance of GPT-3.5, GPT-4, and GPT-4o from OpenAI. All scripts generated/edited in this manner were carefully reviewed, validated, and manually corrected, in the case of errors, by an author prior to implementation in our work.




Contributor: How do we prepare college students for the AI world?



The rise of artificial intelligence is threatening the foundations of education — how we teach, how we assess and even how students learn to think. Cheating has become effortless. Attention spans are dissolving. And the future job landscape is so uncertain that we don’t know what careers to prepare students for. A recent NBC News poll of nearly 20,000 Americans shows the public is evenly divided, with about half believing we should integrate AI into education and half believing we should ban it.

So, as we welcome the Class of 2029 to our campuses, what should colleges do?

Although some urge higher education to prioritize STEM fields and AI-related job skills, a surprising number of technology leaders are advising the opposite.

“I no longer think you should learn to code,” says investor and former Facebook executive Chamath Palihapitiya. “The engineer’s role will be supervisory, at best, within 18 months.”

Roman Vorel, chief information officer of Honeywell, argues that “the future belongs to leaders with high EQs — those with empathy, self-awareness and the ability to make genuine human connections — because AI will democratize IQ.”

Daniel Kokotajlo, co-author of “AI 2027,” which projects a set of scenarios leading to an “enormous” impact of superhuman AI over the next decade, puts it bluntly: “Economic productivity is just no longer the name of the game when it comes to raising kids. What still matters is that my kids are good people — and that they have wisdom and virtue.”

In other words, as machines gain in speed and capability, the most valuable human traits may not be technical but moral and interpersonal. Technology journalist Steven Levy spoke even more plainly in a recent commencement address at Temple University: “You have something that no computer can ever have. It’s a superpower, and every one of you has it in abundance: your humanity.”

It might seem like a tall order to cultivate attention, empathy, judgment and character — qualities that are hard to measure and even harder to mass-produce. Fortunately, we have an answer, one that turns out to be surprisingly ancient: liberal education. Small liberal arts colleges may enroll only a modest 4% of our undergraduates, but they are, historically and today, our nation’s seed bank for deep and broad humanistic education.

Liberal education is structured around serious engagement with texts, works of art and scientific discoveries that have shaped our understanding of truth, justice, beauty and the nature of the world. Students don’t just absorb information — they engage in dialogue and active inquiry, learning to grapple with foundational questions. What is the good life? What is the relationship between mathematics and reality? Can reason and faith coexist? Why do music and art move us?

These acts — reading, looking, listening, discussing — may sound modest, but they are powerful tools for developing the skills students most need. Wrestling with a challenging text over hours and days strengthens attention like physical exercise builds stamina. Conversation sharpens the ability to speak and listen with care, to weigh opposing views, to connect thought with feeling. This kind of education, by deepening our understanding of ourselves and our world, cultivates wisdom — and it’s remarkably resistant to the shortcuts AI offers.

If you spent a week at the college I lead, St. John’s College in Santa Fe, N.M., you might forget that AI even exists. It’s hard to fake a two-hour conversation about “Don Quixote” after reading only an AI summary, and it’s awkward to continue that conversation with your friends over a meal in the dining hall. Should you succumb to the temptations of AI in writing a paper, you’re likely to find yourself floundering in the follow-up discussion with faculty.

Liberal arts colleges have one other indispensable tool for deepening learning and human connection: culture. Most are small, tight-knit communities where students and faculty know one another and ideas are exchanged face to face. Students don’t choose these schools by default; they opt in, often for their distinctiveness. The pull of technology is less strong at these colleges, because they create intense, sustaining, unmediated experiences of communal thinking. This strong culture might be seen as a kind of technology itself — one designed not to dissipate minds and hearts, but to support and deepen them.

Paradoxically, four years largely removed from the influence of technology is one of the best ways of preparing for life and work in an increasingly technologized world.

Carla Echevarria, a 1996 alumna of St. John’s and now a senior manager of user experience at Google DeepMind, admits that she would “struggle with Schrödinger in senior lab and then bang my head against Hegel for a couple of hours and then weep in the library while listening to ‘Tristan und Isolde.’ That brings an intellectual fearlessness.

“When I started working in AI, I didn’t really know anything about AI,” she adds. “I prepared for my interview by reading for a couple of weeks. That fearlessness is the greatest gift of the education.” Many alums echo this belief regardless of the fields they go into.

As we head into this school year and into a future shaped by powerful and unpredictable machines, the best preparation may not be a new invention, but an old discipline. We don’t need a thousand new small colleges, but we need a thousand of our colleges and universities, large and small, to embrace an overdue renaissance of these deeply humanizing educational practices. We don’t need to outpace AI — we need to educate people who can think clearly, act wisely and live well with others.

J. Walter Sterling is the president of St. John’s College, with campuses in Annapolis, Md., and Santa Fe, N.M.




Artificial Intelligence (AI): powering law firm profitability



Profitability remains the ultimate benchmark of growth for law firms, yet sustaining it is no simple task. The most effective route to stronger profits lies in refining operational efficiency, delivering more, with greater accuracy, using the resources already available.

Forward-looking firms recognise that real change requires new ways of working. And the most accessible, cost-effective enabler of such change today is technology, specifically, Artificial Intelligence (AI).

AI as a catalyst for transformation

The rise of AI is ushering in a new era of productivity. By reshaping how legal professionals research, analyse, draft, and communicate, AI is transforming client service and redefining what lawyers can achieve. From predictive analytics to automated drafting, these tools accelerate legal processes and free up valuable lawyer time for strategic, high-value client interactions.

While cost is a natural concern, firms must consider the long-term return on investment. A well-planned AI strategy, underpinned by a cost-benefit analysis, will demonstrate significant efficiency gains and enhanced profitability.

Crucially, today’s advancements build on the foundations of integrated practice management systems. These platforms, housing vast stores of client and matter data, have paved the way for AI to thrive, enabling efficiencies unimaginable only a decade ago.

Optimising human potential

Staffing is one of the greatest expenses for any law firm. Compared with the ongoing costs of recruitment, training, and management, AI can prove to be a highly economical alternative. By automating low-value, routine work, such as research, data extraction, or document review, lawyers can concentrate on higher-value, client-facing activities.

This not only maximises return on investment but also provides opportunities for staff to upskill, focus on complex problem-solving, and deliver more meaningful contributions to clients. Importantly, AI should not be seen as replacing human expertise but as a complement that amplifies it.

Driving efficiency through data

Lawyers face countless time-consuming, repetitive tasks, yet the key to efficiency lies within the systems they use. Firms with modern, centralised practice management platforms are already positioned to take advantage of AI. When quality data is connected, AI tools can unlock unprecedented productivity, reduce errors, and streamline workflows.

Conversely, firms reliant on disconnected systems risk undermining AI’s effectiveness, missing out on the efficiencies and profit gains available to competitors.

Enabling sustainable innovation

Deploying AI independently can be expensive and risky without the right expertise. Partnerships with specialist legal technology providers, such as SOS Legal, offer law firms integrated AI solutions without prohibitive upfront investment. These tools drive operational efficiency, reduce overheads, and improve client service in a sustainable way.

Innovative applications such as AI-powered chatbots and virtual assistants are also changing the way firms engage with potential clients. By providing instant responses, guiding enquiries, and capturing data, these tools elevate lead generation while giving lawyers deeper insights into client needs.

The path forward

AI represents not a threat, but an opportunity. By embracing AI-driven technology, law firms can unlock new levels of efficiency, sharpen client service, and ultimately, enhance profitability. The firms that thrive will be those that use AI to complement human expertise, transforming their people into higher-value advisers while letting technology take care of the rest.

About SOS

SOS Legal’s next-generation software solution, SOS Innovate, is purpose-built to drive enterprise law firm growth and deliver continuous innovation.

With its cutting-edge, flexible architecture and modern design, SOS Innovate provides a scalable platform for ongoing technological advancement—empowering law firms to thrive in an ever-evolving legal landscape.

Designed specifically for enterprise law firms, SOS Innovate provides a powerful platform for the adoption of Legal AI tailored to the realities of modern legal practice. Streamlining research and accelerating drafting, Innovate is built to drive efficiency. For more information book a demo of SOS Innovate here.

 

This article was submitted to be published by SOS as part of their advertising agreement with Today’s Wills and Probate. The views expressed in this article are those of the submitter and not those of Today’s Wills and Probate

 






Better Artificial Intelligence Stock: ASML vs. Taiwan Semiconductor



ASML and Taiwan Semiconductor are foundational AI companies, but only one is delivering impressive results for shareholders.

The artificial intelligence (AI) boom has been fueled by large tech companies developing impressive AI models that can handle increasingly complex tasks. But a sometimes overlooked aspect of AI is the set of companies that manufacture the complex processors that make those models possible.

Two such semiconductor manufacturing companies are ASML (ASML -2.78%) and Taiwan Semiconductor Manufacturing (TSM -3.05%), often referred to as TSMC. While both have their strengths, which one looks like the better stock right now? Here’s what’s happening with each, and which one is likely the better AI stock.


ASML’s opportunities and risks

ASML has a unique angle in the processor manufacturing market through its extreme ultraviolet (EUV) lithography system that’s used to make AI processors. These machines are very complex and not easily replicated, which is why ASML is one of the few companies in the world with these machines. This means that any semiconductor manufacturing company that needs one of these machines has to come to ASML for it.

Despite this opportunity, it’s not all sunshine and rainbows for ASML’s business. The company is reeling from President Donald Trump’s tariffs, and management said recently that potential growth in 2026 will be affected by them. ASML CEO Christophe Fouquet said on the Q2 earnings call: “We continue to see increasing uncertainty driven by macroeconomic and geopolitical developments. Therefore, while we still prepare for growth in 2026, we cannot confirm it at this stage.”

That’s a shift from management’s previous stance that the company would grow significantly this year and next. The company also lowered its estimated sales for this year to about 32.5 billion euros, down from its previous estimate of up to 35 billion euros.

That uncertainty has caused ASML’s shares to plunge recently, dropping 13% over the past 12 months. And with investors still unsure how tariffs will impact the company over the next couple of years, they’re right to be a little wary.

TSMC’s advantages and challenges

Taiwan Semiconductor also has a unique position in the AI space. The company is the leading manufacturer of AI processors, with an estimated 90% of the advanced processor market. This means that when AI giants, including Nvidia, need AI processors made, Taiwan Semiconductor is often their first choice.

This demand continues to fuel growth for the company, and TSMC’s management estimates that AI sales will double this year. The company is already well on its way, with revenue rising by 38% to $30 billion in Q2. TSMC’s bottom line is impressive as well, with earnings rising 61% to $2.47 per American depositary receipt (ADR).

And while ASML is experiencing some turbulence with its business, TSMC is still going strong. Taiwan Semiconductor CEO Wendell Huang said, “Moving into third quarter 2025, we expect our business to be supported by strong demand for our leading-edge process technologies.”

Continued demand for AI processors has resulted in TSMC’s share price climbing about 40% over the past 12 months, which is significantly better than the S&P 500’s gains of 15% over the same time. While some investors are concerned about when the AI boom will be over, it’s certainly too early to call it now.

The verdict: Taiwan Semiconductor is the better AI stock

Taiwan Semiconductor is increasing sales and earnings at a healthy clip, has a corner on AI processor manufacturing, and continues to benefit from an expanding AI market. While ASML is a strong contender, the company’s recent tariff uncertainty and lowered sales expectations aren’t great news for investors.

ASML stock is also slightly more expensive than TSMC’s at the moment, with a price-to-earnings (P/E) ratio of about 28, compared to Taiwan Semiconductor’s 26. I think both companies could be good long-term AI investments, but for all the reasons above, I think Taiwan Semiconductor deserves the win in this matchup.

Chris Neiger has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends ASML, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.


