Conditional generation of real antigen-specific T cell receptor sequences


Published September 8, 2025

By Dhuvarakesh Karthikeyan


Sequence representation

We adopt the same seq2seq framework introduced in ref. 16, relaxing the direction of the pMHC→TCR source–target pairs to train on both pMHC→TCR and TCR→pMHC, but evaluate on the former. To represent the TCR-pMHC trimeric complex, comprising three subinteractions (TCR-peptide, TCR-MHC and peptide-MHC), as a source–target sequence pair, we made a few simplifying assumptions that allowed for a more straightforward problem formulation. First, we assume a stable pMHC complex, reducing the problem space to a dimeric interaction between TCR and pMHC. Second, we focus on the variable amino acid residues at the binding interface. For the TCR, we use the CDR3β loop, a contiguous span of 8–20 amino acids that typically makes the most contact with the peptide38. Similarly, for the pMHC, we use the whole peptide and the MHC pseudo-sequence, defined in ref. 39 as a reduced, non-contiguous string containing the polymorphic amino acids within 4.0 Å of the peptide. We opt for single-character, amino-acid-level tokenization, primarily for its interpretability40. In addition to the 20 canonical amino acids, we use standard special tokens to encode semantic information pertaining to the structure of the sequences, including start of sequence [SOS], end of sequence [EOS], masking [MASK], padding [PAD] and a separator token [SEP] to delineate the boundary between the concatenated peptide and pseudo-sequence. For TCRT5, we additionally employ sequence-type tokens [TCR] and [PMHC], retained from T5’s use of task prefixes20, to designate the translation direction:

TCRBART:

[SOS]EPITOPE[SEP]PSEUDOSEQUENCE[EOS]↔[SOS]CDR3BSEQ[EOS]

TCRT5:

[PMHC]EPITOPE[SEP]PSEUDOSEQUENCE[EOS]↔[TCR]CDR3BSEQ[EOS]

Of note, this formulation is extensible to other sequence representations of both TCR and pMHC by using the [SEP] token to delineate the α- and β-chain information for CDR3, multiple CDRs, and even full-chain sequence representations. Similarly, this approach can be used for the full MHC sequence as well.
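
To make the format above concrete, the sketch below assembles a pMHC source and a CDR3β target into character-level token sequences. It is a minimal illustration only; the special-token strings match those named above, but the helper names and the example epitope and pseudo-sequence are ours, not values from the released vocabulary.

def encode_pmhc_source(epitope, pseudo_seq, t5_style=True):
    # [PMHC]/[SOS] + epitope + [SEP] + MHC pseudo-sequence + [EOS]
    start = "[PMHC]" if t5_style else "[SOS]"
    return [start] + list(epitope) + ["[SEP]"] + list(pseudo_seq) + ["[EOS]"]

def encode_tcr_target(cdr3b, t5_style=True):
    # [TCR]/[SOS] + CDR3beta + [EOS]
    start = "[TCR]" if t5_style else "[SOS]"
    return [start] + list(cdr3b) + ["[EOS]"]

# Hypothetical example pair (epitope and pseudo-sequence values are illustrative)
src = encode_pmhc_source("GILGFVFTL", "YFAMYQENMAHTDANTLYIIYRDYTWVARVYRGY")
tgt = encode_tcr_target("CASSIRSSYEQYF")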

Dataset construction

Core parallel corpus

Our parallel corpus comprised experimentally validated immunogenic TCR-pMHC pairs taken from publicly available databases (McPAS41, VDJdb27 and IEDB28). All data were collected before May 2023. Additionally, we used a large sample of partially labelled data derived from the MIRA42 dataset, which contained CDR3β and peptide sequences but reported MHC information at haplotype resolution rather than the actual presenting MHC allele. The presenting MHC allele was therefore inferred from the individual’s haplotype using MHCflurry 2.0’s43 top-ranked presentation score for the listed alleles. Of importance, these allele-imputed examples were not used in the evaluation. To aggregate the data spanning various sources, formats and nomenclature, we mapped the columns from each individual dataset to a common consensus schema and concatenated the data along the consensus columns. Missing values were imputed based on other information available for that data instance. To keep only the cytotoxic (CD8+) T cells, we filtered for instances in which the cell type was provided or the HLA allele was of MHC class I. Once the data were aggregated and the values were imputed, we applied the following column-level standardization for each source of information:

  • CDR3β, epitope and MHC pseudo-sequence: all amino acid representations were normalized using the ‘tidytcells.aa.standardise’ function found in the tidytcells Python package44.

  • TR genes: the tidytcells package44 was once again used to standardize the nomenclature surrounding the TCR genes (for example, TRBV and TRBJ).

  • HLA allele: HLA allele information was parsed and standardized to the HLA-[A,B,C]*XX:YY format using the ‘mhcgnomes’ package (https://github.com/pirl-unc/mhcgnomes), and only the parsed entities identified as alleles were retained, whereas those with serotype- and class-level resolution were filtered out. For a small number of cases in which mhcgnomes identified an allele group but was unable to find/parse protein-level information, we imputed the protein field by incrementing from ‘*01’ until a matching IMGT allele was found. Although this step has the potential to introduce differences between the imputed pseudo-sequence and the ground truth, we anticipate this source of noise to have a minor effect as the MHC pseudo-sequence is well conserved within the serotype.

Once aggregated, only entries derived from human studies with MHC class I peptides were retained, and of those, only entries containing at least the minimal information of HLA allele, peptide and CDR3β. No other data filtration was performed for the training and validation splits.
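
The sketch below applies the column-level standardization described above to a single record of the consensus schema. The tidytcells.aa.standardise call is named in the text; the TR-gene helper (tt.tr.standardise), the mhcgnomes.parse call and the assumption that failed standardization returns None are our guesses at those packages’ APIs and defaults, so the details may need adjusting to the installed versions.

import tidytcells as tt
import mhcgnomes

def standardize_record(record):
    """Return a cleaned copy of one consensus-schema record, or None if a field fails."""
    out = dict(record)
    # Amino acid columns: CDR3beta and epitope (pseudo-sequence handled analogously)
    for col in ("cdr3b", "epitope"):
        out[col] = tt.aa.standardise(record[col])   # assumed to return None on failure
        if out[col] is None:
            return None
    # TR gene nomenclature, e.g. TRBV/TRBJ symbols
    if record.get("trbv"):
        out["trbv"] = tt.tr.standardise(record["trbv"])
    # HLA allele parsed to HLA-[A,B,C]*XX:YY; serotype/class-level entries are dropped
    parsed = mhcgnomes.parse(record["hla"])
    if type(parsed).__name__ != "Allele":
        return None
    out["hla"] = parsed.to_string()
    return out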

Training/validation split

To assess the feasibility of having the models sample antigen-specific sequences for unseen epitopes, we held out a validation set of the top-20 most target-rich pMHCs. We trained on the remaining data, further removing occurrences of the held-out epitopes bound to alternate MHCs to ensure a clean validation split (Fig. 1c). We retained training sequences with a low edit distance to the validation pMHCs to better understand their influence on performance. The degree to which these sequences exhibit training set similarity is reflected in Extended Data Table 1. The parallel corpus was subsequently de-duplicated to remove near duplicates (peptides with the same allele and a ≥6-mer overlap), which we found to marginally help the overall performance, in accordance with ref. 45; a sketch of this near-duplicate check follows. This resulted in a final dataset split of ~330k training sequence pairs (N = 6,989 pMHCs) and 68k validation sequence pairs (N = 20 pMHCs). A key limitation of our validation dataset is its bias towards mainly viral epitopes and a narrow HLA distribution skewed towards well-studied alleles.
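
As referenced above, a minimal sketch of the near-duplicate check used during de-duplication: two entries are collapsed if they share the same allele and their peptides share any 6-mer. Function names and the example peptides and allele are illustrative, not taken from the paper’s codebase.

def kmers(seq, k=6):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def near_duplicate(pep_a, allele_a, pep_b, allele_b, k=6):
    # same allele and any shared k-mer of length >= 6 counts as a near duplicate
    return allele_a == allele_b and bool(kmers(pep_a, k) & kmers(pep_b, k))

# Example: two overlapping 9-mers presented by the same allele collapse to one entry
assert near_duplicate("GILGFVFTL", "HLA-A*02:01", "ILGFVFTLT", "HLA-A*02:01")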

Unlabelled ‘monolingual’ data

We hypothesized that pretraining the encoder–decoder model using self-supervised methods on pMHC and TCR sequences could help boost the translation performance of the model by learning better representations for source and target sequences, as in ref. 46; such pretraining has crucially been shown to improve performance in the low-resource setting21. For the unlabelled pMHC sequences, we used the positive MHC ligand binding assay data from IEDB (N ≈ 740k)28. For the TCR sequences, we used N ≈ 14M sequences from TCRdb47, of which around 7M CDR3β sequences were unique. For this dataset, we chose to retain duplicate CDR3β sequences: as TCRdb was amassed over multiple studies and populations, we felt that the inclusion of duplicate CDR3β sequences reflects convergent evolution in the true unconditional TCR distribution.

Benchmark ‘test’ data

To fairly compare TCRT5 against the external models ER-TRANSFORMER17 and GRATCR18, we looked for data that would not advantage any one model over another. This meant that we needed to find data that were not in any training or validation set, which would have introduced leakage via model selection. Since GRATCR was fine-tuned exclusively on MIRA data, filtering against our own training and validation sets would also cover the GRATCR model. However, since we were not able to find the training set for ER-TRANSFORMER, we adopted a slightly more stringent data inclusion policy. To account for both our dataset and ER-TRANSFORMER’s, we aimed to find paired TCR-pMHC data from recent studies (2023 onwards) and filtered for epitopes that were at least five amino acid edits away from anything in our training set. Owing to its wide distribution and well-characterized performance, the IMMREP2023 TCR specificity competition10 was used along with recent exports from VDJdb and IEDB, accessed on 25 March 2025 and 1 April 2025, respectively. To ensure that quality examples were taken from VDJdb, entries with a confidence score of ≥2 were chosen. After applying our filtering criteria, we were left with four pMHCs from the IMMREP2023 dataset, four pMHCs from IEDB and eight pMHCs from VDJdb. After manually examining the 16 pMHCs and validating their assay conditions, two pMHCs from VDJdb that shared the same peptide ‘RPIIRPATL’ were dropped due to their inclusion in a 2021 study. The final test set consisted of 14 epitopes; the ‘RVRAYTYSK’ epitope, which contained 895 unique CDR3β sequences, was removed from the benchmark set, leaving n = 13 pMHCs for the benchmark and reserving ‘RVRAYTYSK’ for an in silico simulation. The degree to which these sequences exhibit training set similarity is reflected in Extended Data Table 2.

Supplementary Note A.11 provides more information.
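
A minimal sketch of the leakage filter applied when assembling the benchmark: a candidate test epitope is kept only if it is at least five amino acid edits (Levenshtein distance) away from every training epitope. The helper names are ours.

def levenshtein(a, b):
    # standard dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def passes_leakage_filter(candidate, training_epitopes, min_edits=5):
    return all(levenshtein(candidate, t) >= min_edits for t in training_epitopes)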

Model training

Pretraining

TCRBART was pretrained using masked amino acid modelling (BERT style48), whereas TCRT5 utilized masked span reconstruction, learning to fill in randomly dropped spans with lengths between 1 and 3. Of importance, neither model was trained on complete sequence reconstruction to reduce the possibility of memorization during pretraining. Both models were trained on unlabelled CDR3β and peptide-pseudo-sequences, simultaneously pretraining the encoder and decoder, inspired by the MASS/XLM approach49,50. Unlike MASS/XLM, we omitted per-token learned language embeddings, allowing TCRBART to learn from the size differences between CDR3β and pMHC sequences and TCRT5 to use the [TCR] and [PMHC] starting tokens. To address the imbalance in sequence types, we upsampled sequences for a 70/30 TCR/pMHC split.
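
A minimal sketch of the span-corruption step described above for TCRT5, assuming a 15% corruption rate and per-position [MASK] tokens; this is a simplification of sentinel-style span corruption, and the rate and token handling are our assumptions rather than the paper’s exact settings.

import random

def corrupt_spans(tokens, mask_token="[MASK]", corrupt_frac=0.15, max_span=3):
    # Replace randomly chosen spans of 1-3 positions with [MASK] until roughly
    # corrupt_frac of the sequence is masked
    tokens = list(tokens)
    n_to_mask = max(1, int(len(tokens) * corrupt_frac))
    masked = 0
    while masked < n_to_mask:
        span = min(random.randint(1, max_span), len(tokens))
        start = random.randrange(0, len(tokens) - span + 1)
        for i in range(start, start + span):
            if tokens[i] != mask_token:
                tokens[i] = mask_token
                masked += 1
    return tokens

# e.g. corrupt_spans(list("CASSIRSSYEQYF")) might mask residues 3-5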

Direct training/fine-tuning

For the parallel data, we used the same three training protocols (baseline, bidirectional and multitask) for direct training from random initialization as well as fine-tuning from a pretrained model. This was done by extending the standard categorical cross-entropy loss function (equation (1)), favoured in seq2seq tasks for its desired effect of maximizing the conditional likelihoods over target sequences51,52. For the baseline training, we used the canonical form of the cross-entropy loss, as shown below:

$$\begin{array}{rcl}{\mathcal{L}}&=&{\rm{CE}}({\bf{y}},\hat{{\bf{y}}})=-\mathop{\sum }\limits_{i=1}^{n}{{\bf{y}}}_{i}\log [{\hat{{\bf{y}}}}_{i}]\\ &=&-\mathop{\sum }\limits_{i=1}^{n}\mathop{\sum }\limits_{j=1}^{k}{y}_{ij}\log [{p}_{\theta }({y}_{ij}| {\bf{x}})]\end{array}.$$

(1)

The bidirectional and multitask models were trained using multiterm objectives, forming a linear combination of individual loss terms corresponding to the cross-entropy loss of each task/direction.

$${{\mathcal{L}}}_{\rm{bidxn}}={{\mathcal{L}}}_{pmhc\to tcr}+{{\mathcal{L}}}_{tcr\to pmhc}$$

(2)

$${{\mathcal{L}}}_{\rm{multi}}={{\mathcal{L}}}_{mlm}+{{\mathcal{L}}}_{pmhc\to tcr}+{{\mathcal{L}}}_{tcr\to pmhc}$$

(3)

To mitigate the effects of model forgetting when stacking single-task training epochs, we shuffled the tasks across the epoch using a simple batch processing algorithm (Algorithm 1). After a batch was sampled, it was rearranged into one of four seq2seq mapping possibilities and trained on target reconstruction with the standard cross-entropy loss, which was used for backpropagation. In this way, we could ensure that the model was simultaneously learning multiple tasks during training. For the bidirectional model, this was straightforward as we could swap the input and output tensors during training to get the individual loss contributions of \({{\mathcal{L}}}_{pmhc\to tcr}\) and \({{\mathcal{L}}}_{tcr\to pmhc}\) (equation (2)). For the multitask model, the mapping possibilities are (1) pMHC→TCR, (2) TCR→pMHC, (3) masked/corrupted pMHC*→pMHC and (4) masked/corrupted TCR*→TCR, which combine to form \({{\mathcal{L}}}_{\rm{multi}}\) (equation (3)). These tasks and sequence mappings as seen by TCRBART and TCRT5 are summarized in Fig. 2b.

Algorithm 1

Multitask training step.

Batched input: source pMHCs, X; target TCRs, Y

Sample a ∼ Bernoulli(0.5)

if a > 0.5 then

Swap X and Y

Compute attention masks

end if

Sample b ∼ Bernoulli(0.5)

if b > 0.5 then

Set X = X* and Y = X

Compute attention masks

end if

Predict \(\hat{{\bf{Y}}}=\phi ({\bf{X}})\) and perform gradient updates on CE(\({\bf{y}}\), \(\hat{{\bf{y}}}\))
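
A PyTorch-style sketch of Algorithm 1, assuming a Hugging Face seq2seq model whose forward pass returns the cross-entropy loss when labels are supplied. The batch keys, the corrupt_spans_ids helper (an id-level analogue of the masking sketch above) and the padding handling are our assumptions, not the paper’s training code.

import torch

def multitask_step(model, batch, optimizer):
    # batch keys are illustrative: token-id tensors for source pMHCs and target TCRs
    X, Y = batch["pmhc_ids"], batch["tcr_ids"]
    if torch.rand(1).item() > 0.5:          # a ~ Bernoulli(0.5): swap translation direction
        X, Y = Y, X
    if torch.rand(1).item() > 0.5:          # b ~ Bernoulli(0.5): switch to denoising task
        Y = X                               # reconstruct the clean sequence...
        X = corrupt_spans_ids(X)            # ...from a masked copy X* (hypothetical helper)
    attention_mask = (X != model.config.pad_token_id).long()
    # Padding positions in the labels would normally be set to -100; omitted for brevity
    out = model(input_ids=X, attention_mask=attention_mask, labels=Y)
    out.loss.backward()                     # CE(y, y_hat) on the chosen mapping
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()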

For the purposes of comparison between models originating from different training schemes, each of the models was trained for 20 epochs, from which the checkpoint with the highest average overlap to the known TCR reference set (F1 score) was chosen. We chose this approach to characterize the models’ real-world potential under optimal conditions, as opposed to training for a fixed number of steps or even a fixed number of steps per task (Supplementary Note A.6).

Evaluation

To evaluate antigen specificity, we build our framework around sampling exact CDR3β sequences reported in published experimental data for well-characterized validation epitopes not seen during training. This approach has an interpretable bias compared with black-box error profiles, at the cost of potentially under-representing actual performance. We calculate sequence-similarity-based metrics beyond exact overlap to create a more robust evaluation framework, and characterize their concordances for future use on epitopes with fewer known cognate sequences. Broadly, our metrics evaluate the accuracy of the returned sequences, their diversity or some combination of the two. They are summarized in brief below:

Accuracy metrics

  • Char-BLEU: following BLEU-4 (ref. 53), the character-level BLEU calculates the weighted n-gram precision against the k = 20 closest reference sequences to avoid unduly penalizing accurate predictions under a large reference set. We use NLTK’s ‘sentence_bleu’ function to calculate a single translation’s BLEU score and the ‘corpus_bleu’ function to compute the BLEU score over an entire dataset (a minimal sketch follows this list).

  • Native sequence recovery: we compute the index-matched sequence overlap with the closest known binder of the same sequence length, when available; this is equivalent to one minus the length-normalized Hamming distance. The Levenshtein distance normalized to the length of the closest reference was used for cases in which a size-matched reference did not exist.

  • mAP: borrowed from information retrieval, mAP measures the average precision across the ranked model predictions. Here we rank the generations by model log-likelihood scores and take the average of the precisions at the top-1, top-2, top-3… top-k ranked outputs. Then, we take the mean over the various pMHCs’ average precision values to get the mAP. This metric gauges the accuracy of the model as well as the calibration of its sequence likelihoods.

  • Biological likelihood: to assess the plausibility of model outputs independent of antigen specificity or labelled data, we compute the generation probability of predictions using OLGA, a domain-specific generative model that infers CDR3β sequence likelihood26.
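
As referenced in the Char-BLEU item, a minimal sketch of the character-level BLEU computation using NLTK. Selecting the k closest references by edit distance (reusing the levenshtein helper from the benchmark-filter sketch) and the smoothing choice are our assumptions about details the text leaves open.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def char_bleu(prediction, references, k=20):
    # keep only the k references closest to the prediction
    closest = sorted(references, key=lambda r: levenshtein(prediction, r))[:k]
    refs = [list(r) for r in closest]       # one token per character
    hyp = list(prediction)
    return sentence_bleu(refs, hyp,
                         weights=(0.25, 0.25, 0.25, 0.25),
                         smoothing_function=SmoothingFunction().method1)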

Diversity metrics

  • Total unique sequences: as a measure of global diversity, we compute the number of total unique generations across the top-20 validation pMHCs as a diversity metric that captures model degeneracy and input specificity. This metric is a function of sampling depth and is dependent on the relatedness, or model-perceived relatedness, of the input epitopes in a dataset.

  • Jaccard similarity/dissimilarity index: the Jaccard index, or Jaccard similarity score, measures the similarity of two sets and is calculated as the size of their intersection divided by the size of their union. Since the Jaccard index is inversely proportional to diversity, one minus the Jaccard index is used to represent the diversity between two sets.

  • Positional Δentropy: to quantify the change in diversity between the models’ outputs and the reference distribution per CDR3β position, we report H(q_i) − H(p_i), rather than the Kullback–Leibler divergence, to obtain a signed change in entropy between the amino acid usage of the reference distribution q and the sample distribution p at position i (see the sketch after this list).
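
As referenced above, a minimal sketch of two of the diversity metrics: the Jaccard dissimilarity between two sets of generations, and the signed per-position entropy change H(q_i) − H(p_i) between reference (q) and sample (p) amino acid usage. The treatment of sequences shorter than position i is our simplification.

from collections import Counter
import math

def jaccard_dissimilarity(a, b):
    # 1 - |A ∩ B| / |A ∪ B| for two sets of generated CDR3beta sequences
    return 1.0 - len(a & b) / len(a | b)

def positional_entropy(seqs, i):
    counts = Counter(s[i] for s in seqs if len(s) > i)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def positional_delta_entropy(reference, sample, i):
    # positive values mean the reference is more diverse than the sample at position i
    return positional_entropy(reference, i) - positional_entropy(sample, i)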

Both

  • Precision@K: borrowed from information retrieval, this metric is calculated by sampling K sequences from the model, with the key distinction that rank is not taken into account. True positives are counted as exact sequence matches to the reference target sequences, and false positives, somewhat restrictively, as generated sequences that do not occur in the reference set (a sketch of Precision@K, Recall@K and F1@K follows this list). These quantities are combined to compute precision as follows:

    $$\,\text{Precision}=\frac{\text{True Positives (TP)}}{\text{True Positives (TP)}+\text{False Positives (FP)}\,}.$$

  • Recall@K: also taken from information retrieval, this metric uses exact sequence overlap to measure the model’s ability to sample the breadth of reference sequences; the denominator is the minimum of K and the total number of reference sequences, so that the metric ranges from 0 to 1:

    $$\,\text{Recall}=\frac{\text{True Positives (TP)}}{\min(K,\text{Total Reference Sequences})}.$$

  • F1@K: the F1 score is computed as the harmonic mean of precision and recall, useful for its ability to capture a balanced picture between precision and recall:

    $$\,\text{F1}=2\times \frac{\text{Precision}\times \text{Recall}}{\text{Precision}+\text{Recall}}.$$

  • k-mer spectrum shift: as used in the DNA sequence design space54, the k-mer spectrum shift measures the Jensen–Shannon (JS) divergence between the k-mer usage frequency distributions of two sets of sequences across different values of k. Here we compare the JS divergence between the distribution of k-mers derived from a pMHC’s model generations and its reference set of sequences.
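
As referenced in the Precision@K item, a minimal sketch of Precision@K, Recall@K and F1@K using exact sequence overlap; helper names are ours.

def precision_recall_f1_at_k(generated, reference):
    # `generated` is a list of K sampled CDR3beta sequences; `reference` a set of known binders
    k = len(generated)
    hits = {g for g in generated if g in reference}          # exact-match true positives
    precision = len(hits) / k if k else 0.0
    recall = len(hits) / min(k, len(reference)) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1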

TCRT5 data ablation

To evaluate the impact of specific training decisions, we conducted an ablation study by removing key complexities of our training and data pipelines and measuring their effects on model performance. We started with our chosen model, TCRT5, fine-tuned on the single-task TCR generation with semisynthetic MIRA42 data. Next, we retrained the model without the MIRA data for an equivalent number of steps to assess its contribution. Finally, we removed pretraining altogether, training a model on the reduced dataset from random initialization.

To avoid over-representing the performance of the model trained on MIRA data on similar validation examples, we removed from the validation set three pMHCs (LLLDRLNQL, TTDPSFLGRY and YLQPRTFLL) that were within a single edit of a MIRA example and shared a greater than 5% overlap in their cognate CDR3β sequences. For all the models, we used the same checkpoint heuristic, selecting the model with the highest F1 score.

In silico benchmark

GRATCR

For running GRATCR on the test set peptides, we followed the instructions provided by the GRATCR team18 (https://github.com/zhzhou23/GRATCR). We ran the beam search decoding as provided. Since conditional likelihoods were not output by their beam implementation, the sampled sequence index was used as the translation rank. The script to sample the fine-tuned GRATCR was used as follows:

python GRA.py --data_path="./data/benchmark_peptides.csv" \
  --tcr_vocab_path="./Data/vocab/total-beta.csv" \
  --pep_vocab_path="./Data/vocab/total-epitope.csv" \
  --model_path="./model/gra.pth" --bert_path="./model/bert_pretrain.pth" \
  --gpt_path="./model/gpt_pretrain.pth" --mode="generate" \
  --result_path="./gratcr_benchmark_results.csv" --batch_size=1 --beam=1000

ER-TRANSFORMER

ER-TRANSFORMER was run using the unique amino acid model for a more direct comparison to TCRT5. We used the seq_generate method as described in their codebase with the default parameters shown in https://github.com/TencentAILabHealthcare/ER-BERT/ under Code/evaluate_seq2seq_MIRA.py, as used by the ER-BERT team17. The translation rank was computed in the same manner as for TCRT5, using the Hugging Face infrastructure around model.generate. The code for sampling ER-TRANSFORMER is shown below:

def seq_generate(input_seq, max_length, input_tokenizer, target_tokenizer, beams, k=1000):
    # Space-separate the residues so the ER-BERT tokenizers see one amino acid per token
    input_tokenized = input_tokenizer(" ".join(input_seq),
                                      padding="max_length",
                                      max_length=max_length,
                                      truncation=True,
                                      return_tensors="pt")
    input_ids = input_tokenized.input_ids.to("cpu")
    attention_mask = input_tokenized.attention_mask.to("cpu")
    # `model` is the loaded ER-TRANSFORMER seq2seq model from the ER-BERT codebase
    outputs = model.generate(input_ids,
                             attention_mask=attention_mask,
                             num_beams=beams,
                             num_return_sequences=k)
    output_str = target_tokenizer.batch_decode(outputs, skip_special_tokens=True)
    # Undo the whitespace tokenization and drop empty generations
    output_str_nospace = [s.replace(" ", "") for s in output_str]
    output_str_nospace = [s for s in output_str_nospace if s != ""]
    return output_str_nospace

Additionally, we observed that ER-TRANSFORMER’s performance improved greatly with a post hoc editing step that adds a leading cysteine and a trailing phenylalanine to translations wherever they are missing. Although this decreased the number of unique sequences, indicating that ER-TRANSFORMER was sampling sequences both with and without the required C and F, we felt that the large increase in accuracy warranted its inclusion for a fair benchmark. We annotate this amended model ER-TRANSFORMER+ and hold it to be the fairer comparison of the methods.

Modified F1 scores

In the sparse setting, evaluation based on exact sequence recovery is zero inflated: scores may be zero only because too few known binders are available, not because the generations are poor. To help alleviate this, we took a principled approach to calling generated sequences true positives. The first criterion was a sequence recovery value of >90% to a known reference CDR3β (sketched after the command below). The second used the GIANA 4.1 (ref. 29) clustering algorithm to cluster the generated samples with known reference sequences; purported positives were the generated samples that clustered with a reference sequence. GIANA was run using only CDR3β information and all of the default settings using the following command:

python GIANA4.1.py -f cdr3b_input_file_path -v False
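
A minimal sketch of the first (sequence recovery) criterion for calling a generated sequence a true positive; the GIANA cluster co-membership criterion is not reproduced here, and the helper name is ours.

def recovery_positive(generated, references, threshold=0.9):
    # True positive if index-matched identity to a length-matched reference exceeds 90%
    for ref in references:
        if len(ref) != len(generated):
            continue
        identity = sum(a == b for a, b in zip(generated, ref)) / len(ref)
        if identity > threshold:
            return True
    return False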

In vitro validation

To further evaluate the ability of TCRT5 to generate epitope-specific CDR3β sequences for sparsely validated epitopes, we attempted to experimentally characterize a list of predicted CDR3β sequences for the leukaemia-associated antigen, the HLA-A*02:01-presented WT1 peptide (VLDFAPPGA)31, to be grafted onto a well-characterized TCR-T32 using the sequence identified in ref. 55. From the list of generated CDR3β sequences, we selected 40 for in vitro validation. We chose 20 sequences of the same length as the original CDR3β sequence (13 AA) by oversampling TCRT5 and choosing the first 20 sequences of length 13. Additionally, we chose 20 sequences of variable CDR3β length by sampling 100 sequences from TCRT5 and taking every fifth sequence starting from the first, ranging from 15 to 17 AA in length.

Retroviral transduction

Predicted CDR3β sequences (Extended Data Table 3) were synthesized as gBlocks (IDT, custom) and cloned into a standard SFG retroviral backbone vector56 containing the full-length WT1 TCR sequence. Sequences were codon optimized for expression in human cells and cloned plasmids were validated by Oxford Nanopore sequencing (Plasmidsaurus). TCR-retroviral supernatants were generated using 293T cells and co-transfection of the TCR-SFG, RDF and PegPam3 plasmids with GeneJuice Transfection Reagent (Sigma, 70967-5). Viral supernatants were harvested at 48- and 72-h post-transfection, snap frozen and stored at –80 °C. Transductions were performed using RetroNectin (Takara, T100A) according to the manufacturer’s recommendations.

TCR expression

TCRs were transduced into a genetically engineered Jurkat cell line (Promega, GA1182). The cell line is deficient in endogenous α and β chains (TCR-KO) and constitutively expresses both CD4 and CD8 co-receptors. Additionally, the TCR-KO Jurkats are engineered to express an NFAT-inducible luciferase reporter construct. Following transduction, TCR expression on the cell surface was evaluated by flow cytometry. Before staining, cells were incubated with 50 nM dasatinib for 30 min at 37 °C, which has been shown to improve T cell staining57. TCR-Jurkats were then labelled with the following fluorochrome-labelled monoclonal antibodies: CD8-BV421 (BioLegend, 344748) and TCRα/β-PE (IP-26, BioLegend, 984702). Samples were also stained for viability using Live/Dead Fixable Near-IR (Thermo, L10119) and run on a BD Fortessa flow cytometer (BD Biosciences). Analysis was performed with FlowJo (v. 10.10.0).

T cell activation and luminescence read-out

To assess T cell activation, 4 × 10⁵ TCR-T Jurkats were cultured in a 96-well plate for 6 h with peptide- or DMSO-pulsed T2 cells at a 10:1 effector-to-target ratio. Before co-culture, T2 cells were pulsed overnight at 1 × 10⁶ cells ml⁻¹ supplemented with 10 μM peptide. Peptides were synthesized at GenScript with >95% purity (GenScript, custom). Luciferase expression was measured using the Bio-Glo-NL assay system (Promega, J3081) according to the manufacturer’s protocol. Luminescence was measured as relative luminescent units (RLUs) using a BioTek Synergy 2 microplate reader. All the reported values were normalized by subtracting the average luminescence values of the media control wells. Comparisons against the peptide-null control (DMSO) are reported as fold change values. Selected TCRs were also screened against a set of control peptides: HA-1 (VLRDDLLEA), a minor histocompatibility antigen commonly targeted in leukaemia, and the CEFX Ultra SuperStim Pool MHC-I Subset (JPT, PM-CEFX-4), a mix of 80 class I bacterial and virally derived peptides known to react across a range of class I MHC alleles.

Statistics

Fisher’s exact test (one sided) was used to determine the P values P_bidxn and P_multi, quantifying the difference in the number of polyspecific TCRs sampled by the bidirectional and multitask models. This was computed using the ‘scipy.stats’ Python library. Pairwise Student’s t-tests were computed for tests of significance between peptide and DMSO controls for all biological validation data.


