108 results for "Ken R. Duffy"
Search Results
2. A Sub-0.8pJ/b 16.3Gbps/mm2 Universal Soft-Detection Decoder Using ORBGRAND in 40nm CMOS
- Author
-
Arslan Riaz, Alperen Yasar, Furkan Ercan, Wei An, Jonathan Ngo, Kevin Galligan, Muriel Medard, Ken R. Duffy, and Rabia Tugce Yazicigil
- Published
- 2023
3. Noise Recycling using GRAND for Improving the Decoding Performance
- Author
-
Arslan Riaz, Amit Solomon, Furkan Ercan, Muriel Medard, Rabia Tugce Yazicigil, and Ken R. Duffy
- Published
- 2023
4. PACESS: Practical AI-based Cell Extraction and Spatial Statistics for large 3D biological images
- Author
-
George Adams, Floriane S. Tissot, Chang Liu, Chris Brunsdon, Ken R. Duffy, and Cristina Lo Celso
- Abstract
Efficient methodologies to fully extract and analyse large datasets remain the Achilles heel of 3D tissue imaging. Here we present PACESS, a pipeline for large-scale data extraction and spatial statistical analysis from 3D biological images. First, using 3D object detection neural networks trained on annotated 2D data, we identify and classify the location of hundreds of thousands of cells contained in large biological images. Then, we introduce a series of statistical techniques tailored to work with spatial data, resulting in a 3D statistical map of the tissue from which multi-cellular interactions can be clearly understood. As an illustration of the power of this new approach, we apply this analysis pipeline to an organ known to have a complex and still poorly understood cellular structure: the bone marrow. The analysis reveals coherent, useful biological information on multiple cell population interactions. This novel and powerful spatial analysis pipeline can be broadly used to unravel complex multi-cellular interactions and unlock tissue complexity.
- Published
- 2022
5. A General Security Approach for Soft-information Decoding against Smart Bursty Jammers
- Author
-
Furkan Ercan, Kevin Galligan, Ken R. Duffy, Muriel Medard, David Starobinski, and Rabia Tugce Yazicigil
- Subjects
FOS: Computer and information sciences, Computer Science - Cryptography and Security, Information Theory (cs.IT), Computer Science - Information Theory, Cryptography and Security (cs.CR)
- Abstract
Malicious attacks such as jamming can cause significant disruption or complete denial of service (DoS) to wireless communication protocols. Moreover, jamming devices are getting smarter, making them difficult to detect. Forward error correction, which adds redundancy to data, is commonly deployed to protect communications against the deleterious effects of channel noise. Soft-information error correction decoders obtain reliability information from the receiver to inform their decoding, but in the presence of a jammer such information is misleading and results in degraded error correction performance. As decoders assume noise affects each bit independently, a bursty jammer will lead to greater degradation in performance than a non-bursty one. Here we establish, however, that such temporal dependencies can aid inferences on which bits have been subjected to jamming, thus enabling counter-measures. In particular, we introduce a pre-decoding processing step that updates log-likelihood ratio (LLR) reliability information to reflect inferences in the presence of a jammer, enabling improved decoding performance for any soft detection decoder. The proposed method requires no alteration to the decoding algorithm. Simulation results show that the method correctly infers a significant proportion of jamming in any received frame. Results with one particular decoding algorithm, the recently introduced ORBGRAND, show that the proposed method reduces the block-error rate (BLER) by an order of magnitude for a selection of codes, and prevents complete DoS at the receiver.
Comment: Accepted for GLOBECOM 2022 Workshops. Contains 7 pages and 7 figures
- Published
- 2022
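The pre-decoding idea in the abstract above, updating LLRs where jamming is inferred, can be illustrated with a deliberately simplified sketch. This is not the paper's scheme: the burst detector (a windowed energy threshold against the frame's median reliability), the `factor` and `damp` parameters, and the function name are all invented here for illustration.

```python
import statistics

def damp_jammed_llrs(llrs, window=8, factor=3.0, damp=0.1):
    """Flag length-`window` blocks whose mean |LLR| is anomalously
    large relative to the frame's median reliability (a crude burst
    detector), then shrink those LLRs toward zero so a soft-detection
    decoder treats the affected bits as unreliable."""
    med = statistics.median(abs(x) for x in llrs)
    out = list(llrs)
    for start in range(0, len(llrs), window):
        block = llrs[start:start + window]
        if statistics.fmean(abs(x) for x in block) > factor * med:
            for i in range(start, start + len(block)):
                out[i] = llrs[i] * damp   # damp suspected jammed bits
    return out

# A frame whose second window carries a high-energy burst
frame = [1.0] * 8 + [10.0] * 8 + [1.0] * 16
damped = damp_jammed_llrs(frame)
print(damped[0], damped[8])   # unjammed LLR kept, jammed LLR damped
```

Because the output is still a vector of LLRs, any soft-detection decoder can consume it unchanged, which mirrors the paper's point that no alteration to the decoding algorithm is needed.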
6. Evidentiary evaluation of single cells renders highly informative forensic comparisons across multifarious admixtures
- Author
-
Ken R. Duffy, Desmond S. Lun, Madison M. Mulcahy, Leah O’Donnell, Nidhi Sheth, and Catherine M. Grgicak
- Subjects
Genetics, Pathology and Forensic Medicine
- Published
- 2023
7. Interleaved Noise Recycling using GRAND
- Author
-
Arslan Riaz, Amit Solomon, Furkan Ercan, Muriel Medard, Rabia Tugce Yazicigil, and Ken R. Duffy
- Published
- 2022
8. Towards developing forensically relevant single-cell pipelines by incorporating direct-to-PCR extraction: compatibility, signal quality, and allele detection
- Author
-
Amanda J. Gonzalez, Harish Swaminathan, Nidhi Sheth, Ken R. Duffy, and Catherine M. Grgicak
- Subjects
Detection limit, Extraction (chemistry), Buccal swab, Pathology and Forensic Medicine, Electropherogram, Forensic DNA, Microsatellite, Allele, Biological system, Mathematics
- Abstract
Current analysis of forensic DNA stains relies on the probabilistic interpretation of bulk-processed samples that represent mixed profiles consisting of an unknown number of potentially partial representations of each contributor. Single-cell methods, in contrast, offer a solution to the forensic DNA mixture problem by incorporating a step that separates cells before extraction. A forensically relevant single-cell pipeline relies on efficient direct-to-PCR extractions that are compatible with standard downstream forensic reagents. Here we demonstrate the feasibility of implementing single-cell pipelines into the forensic process by exploring four metrics of electropherogram (EPG) signal quality—i.e., allele detection rates, peak heights, peak height ratios, and peak height balance across low- to high-molecular-weight short tandem repeat (STR) markers—obtained with four direct-to-PCR extraction treatments and a common post-PCR laboratory procedure. Each treatment was used to extract DNA from 102 single buccal cells, whereupon the amplification reagents were immediately added to the tube and the DNA was amplified/injected using post-PCR conditions known to elicit a limit of detection (LoD) of one DNA molecule. The results show that most cells, regardless of extraction treatment, rendered EPGs with at least a 50% true positive allele detection rate and that allele drop-out was not cell independent. Statistical tests demonstrated that extraction treatments significantly impacted all metrics of EPG quality, where the Arcturus® PicoPure™ extraction method resulted in the lowest median allele drop-out rate, highest median average peak height, highest median average peak height ratio, and least negative median values of EPG sloping for GlobalFiler™ STR loci amplified at half volume.
We, therefore, conclude that it is feasible to implement single-cell pipelines for casework purposes, and demonstrate that inferential systems assuming cell independence will not be appropriate in the probabilistic interpretation of a collection of single-cell EPGs.
- Published
- 2021
9. A Universal Maximum Likelihood GRAND Decoder in 40nm CMOS
- Author
-
Arslan Riaz, Muriel Medard, Ken R. Duffy, and Rabia Tugce Yazicigil
- Published
- 2022
10. Partial Encryption after Encoding for Security and Reliability in Data Systems
- Author
-
Alejandro Cohen, Rafael G. L. D'Oliveira, Ken R. Duffy, and Muriel Medard
- Subjects
FOS: Computer and information sciences, Computer Science - Cryptography and Security, Computer Science - Information Theory, Information Theory (cs.IT), Cryptography and Security (cs.CR)
- Abstract
We consider the problem of secure and reliable communication over a noisy multipath network. Previous work considering a noiseless version of our problem proposed a hybrid universal network coding cryptosystem (HUNCC). By combining an information-theoretically secure encoder together with partial encryption, HUNCC is able to obtain security guarantees, even in the presence of an all-observing eavesdropper. In this paper, we propose a version of HUNCC for noisy channels (N-HUNCC). This modification requires four main novelties. First, we present a network coding construction which is jointly, individually secure and error-correcting. Second, we introduce a new security definition which is a computational analogue of individual security, which we call individual indistinguishability under chosen ciphertext attack (individual IND-CCA1), and show that N-HUNCC satisfies it. Third, we present a noise-based decoder for N-HUNCC, which permits the decoding of the encoded-then-encrypted data. Finally, we discuss how to select parameters for N-HUNCC and its error-correcting capabilities.
- Published
- 2022
11. Block turbo decoding with ORBGRAND
- Author
-
Kevin Galligan, Muriel Médard, and Ken R. Duffy
- Subjects
FOS: Computer and information sciences, Computer Science - Information Theory, Information Theory (cs.IT)
- Abstract
Guessing Random Additive Noise Decoding (GRAND) is a family of universal decoding algorithms suitable for decoding any moderate redundancy code of any length. We establish that, through the use of list decoding, soft-input variants of GRAND can replace the Chase algorithm as the component decoder in the turbo decoding of product codes. In addition to being able to decode arbitrary product codes, rather than just those with dedicated hard-input component code decoders, results show that ORBGRAND achieves a coding gain of up to 0.7 dB over the Chase algorithm with the same list size.
- Published
- 2022
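The soft-input ordering that distinguishes ORBGRAND from hard-detection GRAND can be sketched in a few lines: candidate error patterns are queried in non-decreasing "logistic weight", the sum of the reliability ranks of the bits to be flipped, where rank 1 is the least reliable bit. The direct enumeration below is illustrative only; practical ORBGRAND implementations generate this order incrementally via integer partitions rather than sorting an exhaustive list.

```python
from itertools import combinations

def orbgrand_order(n, max_lw):
    """List error patterns for an n-bit block in non-decreasing
    logistic weight, i.e. the sum of the reliability ranks of the
    flipped bits (rank 1 = least reliable). Unoptimised sketch."""
    patterns = []
    for w in range(n + 1):
        for flips in combinations(range(1, n + 1), w):
            lw = sum(flips)
            if lw <= max_lw:
                patterns.append((lw, flips))
    patterns.sort()  # order by logistic weight, ties broken lexically
    return [flips for _, flips in patterns]

order = orbgrand_order(5, 3)
print(order)  # -> [(), (1,), (2,), (1, 2), (3,)]
```

Note how the two-bit pattern flipping ranks 1 and 2 is queried before the single-bit pattern at rank 3: logistic weight, not Hamming weight, drives the query order, which is what lets ORBGRAND exploit soft information.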
12. Physical layer insecurity
- Author
-
Muriel Médard and Ken R. Duffy
- Subjects
FOS: Computer and information sciences, Computer Science - Information Theory, Information Theory (cs.IT)
- Abstract
In the classic wiretap model, Alice wishes to reliably communicate to Bob without being overheard by Eve who is eavesdropping over a degraded channel. Systems for achieving that physical layer security often rely on an error correction code whose rate is below the Shannon capacity of Alice and Bob's channel, so Bob can reliably decode, but above Alice and Eve's, so Eve cannot reliably decode. For the finite block length regime, several metrics have been proposed to characterise information leakage. Here we assess a new metric, the success exponent, and demonstrate it can be operationalized through the use of Guessing Random Additive Noise Decoding (GRAND) to compromise the physical-layer security of any moderate length code. Success exponents are the natural beyond-capacity analogue of error exponents that characterise the probability that a maximum likelihood decoding is correct when the code-rate is above Shannon capacity, which is exponentially decaying in the code-length. Success exponents can be used to approximately evaluate the frequency with which Eve's decoding is correct in beyond-capacity channel conditions. Through the use of GRAND, we demonstrate that Eve can constrain her decoding procedure so that when she does identify a decoding, it is correct with high likelihood, significantly compromising Alice and Bob's communication by truthfully revealing a proportion of it. We provide general mathematical expressions for the determination of success exponents as well as for the evaluation of Eve's query number threshold, using the binary symmetric channel as a worked example. As GRAND algorithms are code-book agnostic and can decode any code structure, we provide empirical results for Random Linear Codes as exemplars. Simulation results demonstrate the practical possibility of compromising physical layer security.
- Published
- 2022
13. IGRAND: decode any product code
- Author
-
Kevin Galligan, Amit Solomon, Arslan Riaz, Muriel Medard, Rabia T. Yazicigil, and Ken R. Duffy
- Published
- 2021
14. High-quality data from a forensically relevant single-cell pipeline enabled by low PBS and proteinase K concentrations
- Author
-
Nidhi Sheth, Ken R. Duffy, and Catherine M. Grgicak
- Subjects
Forensic Genetics, Genetics, DNA, Endopeptidase K, DNA Fingerprinting, Polymerase Chain Reaction, Alleles, Pathology and Forensic Medicine, Microsatellite Repeats
- Abstract
Interpreting forensic DNA signal is arduous since the total intensity is a cacophony of signal from noise, artifact, and allele from an unknown number of contributors (NOC). An alternative to traditional bulk-processing pipelines is a single-cell one, where the sample is collected, and each cell is sequestered, resulting in n single-source, single-cell EPGs (scEPG) that must be interpreted using applicable strategies. As with all forensic DNA interpretation strategies, high quality electropherograms are required; thus, to enhance the credibility of single-cell forensics, it is necessary to produce an efficient direct-to-PCR treatment that is compatible with prevailing downstream laboratory processes. We incorporated the semi-automated microfluidic DEPArray™ technology into the single-cell laboratory and optimized its implementation by testing the effects of four laboratory treatments on single-cell profiles. We focused on testing effects of phosphate buffer saline (PBS) since it is an important reagent that mitigates cell rupture but is also a PCR inhibitor. Specifically, we explored the effect of decreasing PBS concentrations on five electropherogram-quality metrics from 241 leukocytes: profile drop-out, allele drop-out, allele peak heights, peak height ratios, and scEPG sloping. In an effort to improve reagent use, we also assessed two concentrations of proteinase K. The results indicate that decreasing PBS concentrations to 0.5X or 0.25X improves scEPG quality, while modest modifications to proteinase K concentrations did not significantly impact it. We, therefore, conclude that a lower than recommended proteinase K concentration coupled with a lower than recommended PBS concentration results in enhanced scEPGs within the semi-automated single-cell pipeline.
- Published
- 2021
15. Lineage tracing reveals B cell antibody class switching is stochastic, cell-autonomous, and tuneable
- Author
-
Miles B. Horton, HoChan Cheon, Ken R. Duffy, Daniel Brown, Shalin H. Naik, Carolina Alvarado, Joanna R. Groom, Susanne Heinzel, and Philip D. Hodgkin
- Subjects
Immunoglobulin Isotypes, Genetic Recombination, B-Lymphocytes, Infectious Diseases, Cytidine Deaminase, Immunology, Immunology and Allergy, Immunoglobulin Class Switching
- Abstract
To optimize immunity to pathogens, B lymphocytes generate plasma cells with functionally diverse antibody isotypes. By lineage tracing single cells within differentiating B cell clones, we identified the heritability of discrete fate controlling mechanisms to inform a general mathematical model of B cell fate regulation. Founder cells highly influenced clonal plasma-cell fate, whereas class switch recombination (CSR) was variegated within clones. In turn, these CSR patterns resulted from independent all-or-none expression of both activation-induced cytidine deaminase (AID) and IgH germline transcription (GLT), with the latter being randomly re-expressed after each cell division. A stochastic model premised on these molecular transition rules accurately predicted antibody switching outcomes under varied conditions in vitro and during an immune response in vivo. Thus, the generation of functionally diverse antibody types follows rules of autonomous cellular programming that can be adapted and modeled for the rational control of antibody classes for potential therapeutic benefit.
- Published
- 2022
16. HSPCs display within-family homogeneity in differentiation and proliferation despite population heterogeneity
- Author
-
Ken R. Duffy, Tamar Tak, Leïla Perié, Caroline Marty, Gaël Simon, Camélia Benlabiod, Isabelle Plo, Giulio Prevedello, and Noémie Paillon
- Subjects
Concordance, Cell type, Mouse, Progenitor cell, Cellular differentiation, Bone Marrow Cells, Mice, Bone Marrow, Animals, Cells, Cultured, Cluster of differentiation, Cell Differentiation, Hematopoietic Stem Cells, Stem Cells and Regenerative Medicine, Mice, Inbred C57BL, Haematopoiesis, Leukemia, Cell proliferation, Evolutionary biology, Stem cell, Research Article
- Abstract
High-throughput single-cell methods have uncovered substantial heterogeneity in the pool of hematopoietic stem and progenitor cells (HSPCs), but how much instruction is inherited by offspring from their heterogeneous ancestors remains unanswered. Using a method that enables simultaneous determination of common ancestor, division number, and differentiation status of a large collection of single cells, our data revealed that murine cells that derived from a common ancestor had significant similarities in their division progression and differentiation outcomes. Although each family diversifies, the overall collection of cell types observed is composed of homogeneous families. Heterogeneity between families could be explained, in part, by differences in ancestral expression of cell surface markers. Our analyses demonstrate that fate decisions of cells are largely inherited from ancestor cells, indicating the importance of common ancestor effects. These results may have ramifications for bone marrow transplantation and leukemia, where substantial heterogeneity in HSPC behavior is observed.
- Published
- 2021
17. Author response: HSPCs display within-family homogeneity in differentiation and proliferation despite population heterogeneity
- Author
-
Noémie Paillon, Tamar Tak, Giulio Prevedello, Gaël Simon, Caroline Marty, Ken R. Duffy, Camélia Benlabiod, Isabelle Plo, and Leïla Perié
- Subjects
Evolutionary biology, Homogeneity (statistics), Population Heterogeneity, Biology
- Published
- 2021
18. Network Infusion to Infer Information Sources in Networks
- Author
-
Gerald Quon, Ken R. Duffy, Manolis Kellis, Soheil Feizi, and Muriel Medard
- Subjects
FOS: Computer and information sciences, FOS: Physical sciences, Physics - Physics and Society, Physics and Society (physics.soc-ph), Computer Science - Social and Information Networks, Social and Information Networks (cs.SI), Computer Networks and Communications, Computer science, Inference, Diffusion, Social network, Inverse problem, Computer Science Applications, Control and Systems Engineering, Centrality, Data mining
- Abstract
Several significant models have been developed that enable the study of diffusion of signals across biological, social and engineered networks. Within these established frameworks, the inverse problem of identifying the source of the propagated signal is challenging, owing to the numerous alternative possibilities for signal progression through the network. In real world networks, the challenge of determining sources is compounded as the true propagation dynamics are typically unknown, and when they have been directly measured, they rarely conform to the assumptions of any of the well-studied models. In this paper we introduce a method called Network Infusion (NI) that has been designed to circumvent these issues, making source inference practical for large, complex real world networks. The key idea is that to infer the source node in the network, full characterization of diffusion dynamics, in many cases, may not be necessary. This objective is achieved by creating a diffusion kernel that well-approximates standard diffusion models, but lends itself to inversion, by design, via likelihood maximization or error minimization. We apply NI for both single-source and multi-source diffusion, for both single-snapshot and multi-snapshot observations, and for both homogeneous and heterogeneous diffusion setups. We prove the mean-field optimality of NI for different scenarios, and demonstrate its effectiveness over several synthetic networks. Moreover, we apply NI to a real-data application, identifying news sources in the Digg social network, and demonstrate the effectiveness of NI compared to existing methods. Finally, we propose an integrative source inference framework that combines NI with a distance centrality-based method, which leads to a robust performance in cases where the underlying dynamics are unknown.
Comment: 21 pages, 13 figures
- Published
- 2019
19. Author Correction: Replicative history marks transcriptional and functional disparity in the CD8+ T cell memory pool
- Author
-
Kaspar Bresser, Lianne Kok, Arpit C. Swain, Lisa A. King, Laura Jacobs, Tom S. Weber, Leïla Perié, Ken R. Duffy, Rob J. de Boer, Ferenc A. Scheeren, and Ton N. Schumacher
- Subjects
Immunology, Immunology and Allergy
- Published
- 2022
20. CRC Codes as Error Correction Codes
- Author
-
Ken R. Duffy, Wei An, and Muriel Medard
- Subjects
FOS: Computer and information sciences, Computer Science - Information Theory, Information Theory (cs.IT), Block code, Short code, Code (cryptography), Latency (engineering), Error detection and correction, Encoder, Algorithm, BCH code, Decoding methods
- Abstract
CRC codes have long been adopted in a vast range of applications. The established notion that they are suitable primarily for error detection can be set aside through use of the recently proposed Guessing Random Additive Noise Decoding (GRAND). Hard-detection (GRAND-SOS) and soft-detection (ORBGRAND) variants can decode any short, high-rate block code, making them suitable for error correction of CRC-coded data. When decoded with GRAND, short CRC codes have error correction capability that is at least as good as popular codes such as BCH codes, but with no restriction on either code length or rate. The state-of-the-art CA-Polar codes are concatenated CRC and Polar codes. For error correction, we find that the CRC is a better short code than either Polar or CA-Polar codes. Moreover, the standard CA-SCL decoder only uses the CRC for error detection and therefore suffers severe performance degradation in short, high-rate settings when compared with the performance GRAND provides, which uses all of the CA-Polar bits for error correction. Using GRAND, existing systems can be upgraded from error detection to low-latency error correction without re-engineering the encoder, and additional applications of CRCs can be found in IoT, Ultra-Reliable Low Latency Communication (URLLC), and beyond. The universality of GRAND, its ready parallelized implementation in hardware, and the good performance of CRCs as codes make their combination a viable solution for low-latency applications.
Comment: Submitted to the IEEE for possible publication.
- Published
- 2021
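The GRAND principle that recurs throughout these entries can be illustrated with a minimal hard-detection sketch: guess putative noise sequences from most to least likely (for a binary symmetric channel, in order of increasing Hamming weight) and stop at the first guess that turns the received word into a codeword. The [7,4] Hamming code below is just a convenient worked example; GRAND itself is codebook-agnostic and only needs a membership test.

```python
from itertools import combinations

def grand_decode(received, parity_checks, max_weight=3):
    """Hard-detection GRAND sketch: test error patterns in order of
    increasing Hamming weight and return the first corrected word
    that satisfies every parity check. Returns None (an erasure) if
    no codeword is found within the abandonment threshold."""
    n = len(received)
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            candidate = list(received)
            for i in flips:
                candidate[i] ^= 1
            if all(sum(c & r for c, r in zip(candidate, row)) % 2 == 0
                   for row in parity_checks):
                return candidate
    return None

# Toy run with the [7,4] Hamming code's parity-check matrix
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 1, 0, 0, 0, 0]   # all-zero codeword with bit 2 flipped
print(grand_decode(received, H))   # -> [0, 0, 0, 0, 0, 0, 0]
```

Because only the membership test depends on the code, swapping `parity_checks` for a CRC check (or any other codebook test) changes nothing else, which is the sense in which GRAND turns a CRC from an error-detecting code into an error-correcting one.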
21. Replicative history marks transcriptional and functional disparity in the CD8+ T cell memory pool
- Author
-
Kaspar Bresser, Lisa A. King, Arpit C. Swain, Leïla Perié, Lianne Kok, Ferenc A. Scheeren, Tom S. Weber, Ton N. Schumacher, Ken R. Duffy, Laura Jacobs, and Rob J. de Boer
- Subjects
Text mining, Memory pool, Cytotoxic T cell, Computational biology, Biology
- Abstract
Clonal expansion is a core aspect of T cell immunity. However, little is known with respect to the relationship between replicative history and the formation of distinct CD8+ memory T cell subgroups. To address this issue, we developed a genetic-tracing approach, termed the DivisionRecorder, that reports the extent of past proliferation of cell pools in vivo. Using this system to genetically ‘record’ the replicative history of different CD8+ T cell populations throughout a pathogen-specific immune response, we demonstrate that the central memory T cell (TCM) pool is marked by a higher number of prior divisions than the effector memory T cell pool, due to the combination of strong proliferative activity during the acute immune response and selective proliferative activity after pathogen clearance. Furthermore, by combining DivisionRecorder analysis with single cell transcriptomics and functional experiments, we show that replicative history identifies distinct cell pools within the TCM compartment. Specifically, we demonstrate that lowly divided TCM display enriched expression of stem-cell-associated genes, and that such lowly divided cells are superior in eliciting a proliferative recall response. The latter data provide the first evidence that a stem cell like memory T cell pool that reconstitutes the CD8+ T cell effector pool upon reinfection is marked by prior quiescence.
- Published
- 2021
22. Replicative history marks transcriptional and functional disparity in the CD8+ T cell memory pool
- Author
-
Kaspar Bresser, Lianne Kok, Arpit C. Swain, Lisa A. King, Laura Jacobs, Tom S. Weber, Leïla Perié, Ken R. Duffy, Rob J. de Boer, Ferenc A. Scheeren, and Ton N. Schumacher
- Subjects
CD8-Positive T-Lymphocytes, Immunologic Memory
- Abstract
Clonal expansion is a core aspect of T cell immunity. However, little is known with respect to the relationship between replicative history and the formation of distinct CD8+ memory T cell subgroups.
- Published
- 2021
23. MDS coding is better than replication for job completion times
- Author
-
Ken R. Duffy and Seva Shneer
- Subjects
Operations research, Computer science, Applied Mathematics, Multiplicative function, Management Science and Operations Research, Industrial and Manufacturing Engineering, Server, Software, Computer network, Coding
- Abstract
If queue-states are unknown to a dispatcher, a replication strategy has been proposed where job copies are sent randomly to servers. When the service of any copy is completed, redundant copies are removed. For jobs that can be algebraically manipulated and have linear servers, to get the response-time performance of sending d copies of n jobs via the replication strategy, with Maximum Distance Separable (MDS) codes we show that n + d jobs suffice. That is, while replication is multiplicative, MDS is linear.
- Published
- 2021
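The closing claim of the abstract, that replication is multiplicative while MDS coding is linear, is a statement about how many jobs each scheme dispatches to achieve the same response-time performance. A toy tally (the redundancy level d = 4 is an arbitrary choice for illustration) makes the scaling concrete:

```python
d = 4  # redundancy level: copies per job, or extra MDS-coded jobs

for n in (10, 100, 1000):
    replication = n * d   # replication: d copies of each of the n jobs
    mds = n + d           # per the paper, n + d MDS-coded jobs suffice
    print(f"n={n:4d}: replication dispatches {replication} jobs, MDS {mds}")
```

As n grows, the replication overhead grows in proportion to n while the MDS overhead stays a fixed d extra jobs, which is the multiplicative-versus-linear distinction the paper draws.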
24. Inferring Differentiation Order in Adaptive Immune Responses from Population-Level Data
- Author
-
Philip D. Hodgkin, Alexander S. Miles, and Ken R. Duffy
- Subjects
Order, Lineage (genetic), Immune system, Effector, Population, Biology, Acquired immune system, Pathogen, CD8, Cell biology
- Abstract
During an adaptive immune response, a population of naive lymphocytes becomes heterogeneous, containing a population of effector cells that clear the pathogen and then die during the contraction phase and a population of memory cells that are of a longer lineage. For CD8+ T cells, many theories have been proposed for the dynamics of this differentiation.
- Published
- 2021
25. Multi-Code Multi-Rate Universal Maximum Likelihood Decoder using GRAND
- Author
-
Arslan Riaz, Vaibhav Bansal, Rabia Tugce Yazicigil, Qijun Liu, Wei An, Muriel Medard, Amit Solomon, Ken R. Duffy, and Kevin Galligan
- Subjects
Downtime, Noise, CMOS, Computer science, Code (cryptography), Latency, Clock gating, Throughput, Computer hardware, Decoding methods
- Abstract
We present the first fully-integrated universal Maximum Likelihood decoder in 40 nm CMOS using the Guessing Random Additive Noise Decoding (GRAND) algorithm for low-power applications. The 0.83 mm2 multi-code multi-rate universal decoder can efficiently decode any code of length up to 128 bits with 1 μs latency at 68 MHz. Dynamic clock gating leveraging noise statistics reduces the average power dissipation to 3.75 mW at 1.1 V or 30.6 pJ/decoded bit with a throughput of 122.6 Mb/s. Universal decoding reduces hardware footprint, and the design allows seamless swapping between codebooks with no downtime, enabling use by multiple applications without switch-over.
- Published
- 2021
26. Managing Noise and Interference Separately - Multiple Access Channel Decoding using Soft GRAND
- Author
-
Ken R. Duffy, Amit Solomon, and Muriel Medard
- Subjects
Noise, Redundancy (information theory), Interference, Successive interference cancellation, Computer science, Maximum a posteriori estimation, Joint decoding, Algorithm, Decoding methods, Communication channel
- Abstract
Two main problems arise in the Multiple Access Channel (MAC): interference from different users, and additive channel noise. Maximum A-Posteriori (MAP) joint decoding or successive interference cancellation are known to be capacity-achieving for the MAC when paired with appropriate codes. We extend the recently proposed Soft Guessing Random Additive Noise Decoder (SGRAND) to guess, using soft information, the effect of noise on the sum of users' transmitted codewords. Next, we manage interference by applying ZigZag decoding over the resulting putative noiseless MAC to obtain candidate codewords. Guessing continues until the candidate codewords thus obtained pertain to the corresponding users' codebooks. This MAC SGRAND decoder is a MAP decoder that requires no coordination between users, who can use arbitrary moderate-redundancy, short-length codes of different types and rates.
- Published
- 2021
27. The a posteriori probability of the number of contributors when conditioned on an assumed contributor
- Author
-
Desmond S. Lun, Catherine M. Grgicak, and Ken R. Duffy
- Subjects
Nuisance variable, Computer science, Bayesian probability, Probabilistic logic, Bayes Theorem, Sample (statistics), DNA, DNA Fingerprinting, Pathology and Forensic Medicine, Electropherogram, Genetics, Range (statistics), Humans, Point estimation, Algorithm, Alleles
- Abstract
Forensic DNA signal is notoriously challenging to assess, requiring computational tools to support its interpretation. Over-expressions of stutter, allele drop-out, allele drop-in, degradation, differential degradation, and the like, make forensic DNA profiles too complicated to evaluate by manual methods. In response, computational tools that make point estimates on the Number of Contributors (NOC) to a sample have been developed, as have Bayesian methods that evaluate an A Posteriori Probability (APP) distribution on the NOC. In cases where an overly narrow NOC range is assumed, the downstream strength of evidence may be incomplete insofar as the evidence is evaluated with an inadequate set of propositions. In the current paper, we extend previous work on NOCIt, a Bayesian method that determines an APP on the NOC given an electropherogram, by reporting on an implementation where the user can add assumed contributors. NOCIt is a continuous system that incorporates models of peak height (including degradation and differential degradation), forward and reverse stutter, noise, and allelic drop-out, while being cognizant of allele frequencies in a reference population. When conditioned on a known contributor, we found that the mode of the APP distribution can shift to one greater when compared with the circumstance where no known contributor is assumed, and that occurred most often when the assumed contributor was the minor constituent to the mixture. In a development of a result of Slooten and Caliebe (FSI:G, 2018) that, under suitable assumptions, establishes the NOC can be treated as a nuisance variable in the computation of a likelihood ratio between the prosecution and defense hypotheses, we show that this computation must not only use coincident models, but also coincident contextual information. 
The results reported here, therefore, illustrate the power of modern probabilistic systems to assess full weights-of-evidence, and to provide information on reasonable NOC ranges across multiple contexts.
- Published
- 2021
28. Cyton2: A Model of Immune Cell Population Dynamics That Includes Familial Instructional Inheritance
- Author
-
HoChan Cheon, Andrey Kan, Giulio Prevedello, Simone C. Oostindie, Simon J. Dovedi, Edwin D. Hawkins, Julia M. Marchingo, Susanne Heinzel, Ken R. Duffy, and Philip D. Hodgkin
- Subjects
0303 health sciences ,education.field_of_study ,proliferation ,Computer applications to medicine. Medical informatics ,Cell ,Population ,R858-859.7 ,Biology ,immune response ,03 medical and health sciences ,Inheritance (object-oriented programming) ,0302 clinical medicine ,medicine.anatomical_structure ,Immune system ,familial correlations ,Evolutionary biology ,population dynamics ,medicine ,education ,mathematical model ,030304 developmental biology ,030215 immunology - Abstract
Lymphocytes are the central actors in adaptive immune responses. When challenged with antigen, a small number of B and T cells have a cognate receptor capable of recognising and responding to the insult. These cells proliferate, building an exponentially growing, differentiating clone army to fight off the threat, before ceasing to divide and dying over a period of weeks, leaving in their wake memory cells that are primed to rapidly respond to any repeated infection. Due to the non-linearity of lymphocyte population dynamics, mathematical models are needed to interrogate data from experimental studies. In the absence of evidence to the contrary, and appealing to arguments based on Occam’s Razor, these models typically assume that newly born progeny behave independently of their predecessors. Recent experimental studies, however, challenge that assumption, making clear that there is substantial inheritance of timed fate changes from each cell by its offspring, calling for a revision to the existing mathematical modelling paradigms used for information extraction. By assessing long-term live-cell imaging of stimulated murine B and T cells in vitro, we distilled the key phenomena of these within-family inheritances and used them to develop a new mathematical model, Cyton2, that encapsulates them. We establish the model’s consistency with these newly observed fine-grained features. Two natural concerns for any model that includes familial correlations would be that it is overparameterised or computationally inefficient in data fitting, but neither is the case for Cyton2. We demonstrate Cyton2’s utility by challenging it with high-throughput flow cytometry data, which confirms the robustness of its parameter estimation as well as its ability to extract biological meaning from complex mixed stimulation experiments.
Cyton2, therefore, offers an alternative mathematical model, one more closely aligned with experimental observation, for drawing inferences on lymphocyte population dynamics.
- Published
- 2021
- Full Text
- View/download PDF
29. Decision letter: Cell-density independent increased lymphocyte production and loss rates post-autologous HSCT
- Author
-
Andrew J. Yates and Ken R. Duffy
- Subjects
business.industry ,Immunology ,Cell density ,Medicine ,Autologous hsct ,Lymphocyte production ,business - Published
- 2020
30. Noise Recycling
- Author
-
Amit Solomon, Alejandro Cohen, Muriel Medard, Ken R. Duffy, and Massachusetts Institute of Technology. Research Laboratory of Electronics
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Science - Information Theory ,Information Theory (cs.IT) ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Signal ,Block Error Rate ,Noise ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Joint (audio engineering) ,Realization (systems) ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Communication channel - Abstract
We introduce Noise Recycling, a method that enhances decoding performance of channels subject to correlated noise without joint decoding. The method can be used with any combination of codes, code-rates and decoding techniques. In the approach, a continuous realization of noise is estimated from a lead channel by subtracting its decoded output from its received signal. This estimate is then used to improve the accuracy of decoding of an orthogonal channel that is experiencing correlated noise. In this design, channels aid each other only through the provision of noise estimates post-decoding. In a Gauss-Markov model of correlated noise, we constructively establish that noise recycling employing a simple successive order enables higher rates than not recycling noise. Simulations illustrate that noise recycling can be employed with any code and decoder, and that noise recycling shows Block Error Rate (BLER) benefits when applying the same predetermined order as used to enhance the rate region. Finally, for short codes we establish that an additional BLER improvement is possible through noise recycling with racing, where the lead channel is not pre-determined, but is chosen on the fly based on which decoder completes first., Comment: Appears in IEEE International Symposium on Information Theory, ISIT 2020, based on arXiv:2006.04897
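The estimate-and-subtract step at the heart of the abstract can be sketched in a few lines. This is a minimal illustration rather than the paper's scheme: the lead channel here is uncoded BPSK with hard-decision slicing standing in for a real decoder, and the correlation `rho`, noise level `sigma`, and block length `n` are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two orthogonal BPSK channels with Gauss-Markov correlated noise:
# n2 = rho * n1 + sqrt(1 - rho^2) * w. Knowing n1 shrinks the effective
# noise on channel 2. rho, sigma and n are assumed illustrative values.
rho, sigma, n = 0.9, 0.8, 100_000
x1 = rng.choice([-1.0, 1.0], size=n)
x2 = rng.choice([-1.0, 1.0], size=n)
n1 = sigma * rng.standard_normal(n)
n2 = rho * n1 + np.sqrt(1 - rho**2) * sigma * rng.standard_normal(n)
y1, y2 = x1 + n1, x2 + n2

# Lead channel: "decode" (hard-decision slicing as a crude stand-in for a
# real decoder), then estimate its noise by subtracting the decoded output.
x1_hat = np.sign(y1)
noise_est = y1 - x1_hat  # equals n1 wherever the lead decision is correct

# Recycle: cancel the correlated noise component on the orthogonal channel.
y2_recycled = y2 - rho * noise_est
ber_plain = np.mean(np.sign(y2) != x2)
ber_recycled = np.mean(np.sign(y2_recycled) != x2)
```

Even with this crude slicer as the lead decoder, `ber_recycled` comes out lower than `ber_plain`; in the paper the lead channel is decoded with an actual code, making the noise estimate correspondingly more accurate.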
- Published
- 2020
31. Soft Maximum Likelihood Decoding using GRAND
- Author
-
Ken R. Duffy, Muriel Medard, Amit Solomon, Massachusetts Institute of Technology. Research Laboratory of Electronics, and Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
- Subjects
Block code ,FOS: Computer and information sciences ,Computer science ,Maximum likelihood ,Information Theory (cs.IT) ,Computer Science - Information Theory ,05 social sciences ,050801 communication & media studies ,Data_CODINGANDINFORMATIONTHEORY ,Noise ,0508 media and communications ,Control channel ,Cyclic redundancy check ,0502 economics and business ,050211 marketing ,Forward error correction ,Algorithm ,5G ,Decoding methods ,Computer Science::Information Theory - Abstract
© 2020 IEEE. Maximum Likelihood (ML) decoding of forward error correction codes is known to be optimally accurate, but is not used in practice as it proves too challenging to efficiently implement. Here we propose a development of a previously described hard detection ML decoder called Guessing Random Additive Noise Decoding (GRAND). We introduce Soft GRAND (SGRAND), an ML decoder that fully avails of soft detection information and is suitable for use with any arbitrary high-rate, short-length block code. We assess SGRAND's performance on Cyclic Redundancy Check (CRC)-aided Polar (CA-Polar) codes, which will be used for all control channel communication in 5G New Radio (NR), comparing its accuracy with CRC-Aided Successive Cancellation List decoding (CA-SCL), a state-of-the-art soft-information decoder specific to CA-Polar codes.
- Published
- 2020
32. Ordered Reliability Bits Guessing Random Additive Noise Decoding
- Author
-
Ken R. Duffy
- Subjects
FOS: Computer and information sciences ,Signal processing ,Computer science ,Reliability (computer networking) ,Computer Science - Information Theory ,Information Theory (cs.IT) ,Block error ,Noise ,Block Error Rate ,Redundancy (information theory) ,State (computer science) ,Algorithm ,Decoding methods ,BCH code - Abstract
Modern applications are driving demand for ultra-reliable low-latency communications, rekindling interest in the performance of short, high-rate error correcting codes. To that end, here we introduce a soft-detection variant of Guessing Random Additive Noise Decoding (GRAND) called Ordered Reliability Bits GRAND (ORBGRAND) that can decode any short, high-rate block code. For a code of $n$ bits, it avails of no more than $\lceil\log_2(n)\rceil$ bits of code-book-independent quantized soft detection information per received bit to determine an accurate decoding while retaining the original algorithm's suitability for a highly parallelized implementation in hardware. ORBGRAND is shown to provide similar block error performance for codes of distinct classes (BCH, CA-Polar and RLC) with low complexity, while providing better block error rate performance than CA-SCL, a state-of-the-art soft-detection decoder for CA-Polar codes.
- Published
- 2020
- Full Text
- View/download PDF
33. Discrete convolution statistic for hypothesis testing
- Author
-
Ken R. Duffy and Giulio Prevedello
- Subjects
Statistics and Probability ,021103 operations research ,Distribution (number theory) ,Maximum likelihood ,0211 other engineering and technologies ,Linear model ,62G05, 62G10, 62G20, 62P10, 62P20 ,Mathematics - Statistics Theory ,02 engineering and technology ,Statistics Theory (math.ST) ,16. Peace & justice ,01 natural sciences ,Convolution ,010104 statistics & probability ,FOS: Mathematics ,Applied mathematics ,0101 mathematics ,Random variable ,Statistic ,Mathematics ,Statistical hypothesis testing - Abstract
The question of testing for equality in distribution between two linear models, each consisting of sums of distinct discrete independent random variables with unequal numbers of observations, has emerged from biological research. In this case, the computation of classical $\chi^2$ statistics, which would not include all observations, results in loss of power, especially when sample sizes are small. Here, as an alternative that uses all data, the nonparametric maximum likelihood estimator for the distribution of a sum of discrete, independent random variables, which we call the convolution statistic, is proposed and its limiting normal covariance matrix determined. To challenge null hypotheses about the distribution of this sum, the generalized Wald method is applied to define a testing statistic whose distribution is asymptotic to a $\chi^2$ with as many degrees of freedom as the rank of such covariance matrix. Rank analysis also reveals a connection with the roots of the probability generating functions associated with the addend variables of the linear models. A simulation study is performed to compare the convolution test with Pearson's $\chi^2$, and to provide usage guidelines.
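Under the independence assumptions above, the nonparametric maximum likelihood estimator for the law of the sum has a simple closed form: the convolution of the empirical PMFs of the two samples. A minimal sketch (the supports and sample sizes are assumed for illustration):

```python
import numpy as np

def empirical_pmf(sample, support_size):
    """Empirical PMF of an integer sample on {0, ..., support_size - 1}."""
    return np.bincount(sample, minlength=support_size) / len(sample)

# Two independent discrete variables observed with unequal sample sizes.
rng = np.random.default_rng(1)
x = rng.integers(0, 4, size=200)  # support {0, 1, 2, 3}
y = rng.integers(0, 3, size=50)   # support {0, 1, 2}

# Nonparametric MLE for the law of X + Y: convolve the empirical PMFs.
# This is the "convolution statistic" of the abstract.
pmf_sum = np.convolve(empirical_pmf(x, 4), empirical_pmf(y, 3))
# pmf_sum lives on the support {0, ..., 5} and sums to 1.
```

Note how all 250 observations contribute, whereas a classical $\chi^2$ built on paired observations could use at most 50 of them.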
- Published
- 2020
- Full Text
- View/download PDF
34. Keep the bursts and ditch the interleavers
- Author
-
Ken R. Duffy, Wei An, and Muriel Medard
- Subjects
FOS: Computer and information sciences ,Interleaving ,Computer science ,Computer Science - Information Theory ,Information Theory (cs.IT) ,Markov process ,020206 networking & telecommunications ,Throughput ,02 engineering and technology ,Data_CODINGANDINFORMATIONTHEORY ,Block error ,symbols.namesake ,Block Error Rate ,Noise ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Electrical and Electronic Engineering ,Algorithm ,Decoding methods ,Communication channel ,Computer Science::Information Theory - Abstract
To facilitate applications in IoT, 5G, and beyond, there is an engineering need to enable high-rate, low-latency communications. Errors in physical channels typically arrive in clumps, but most decoders are designed assuming that channels are memoryless. As a result, communication networks rely on interleaving over tens of thousands of bits so that channel conditions match decoder assumptions. Even for short high rate codes, awaiting sufficient data to interleave at the sender and de-interleave at the receiver is a significant source of unwanted latency. Using existing decoders with non-interleaved channels causes a degradation in block error rate performance owing to mismatch between the decoder's channel model and true channel behaviour. Through further development of the recently proposed Guessing Random Additive Noise Decoding (GRAND) algorithm, which we call GRAND-MO for GRAND Markov Order, here we establish that by abandoning interleaving and embracing bursty noise, low-latency, short-code, high-rate communication is possible with block error rates that outperform their interleaved counterparts by a substantial margin. Moreover, while most decoders are twinned to a specific code-book structure, GRAND-MO can decode any code. Using this property, we establish that certain well-known structured codes are ill-suited for use in bursty channels, but Random Linear Codes (RLCs) are robust to correlated noise. This work suggests that the use of RLCs with GRAND-MO is a good candidate for applications requiring high throughput with low latency., Comment: 6 pages
- Published
- 2020
- Full Text
- View/download PDF
35. 5G NR CA-Polar Maximum Likelihood Decoding by GRAND
- Author
-
Amit Solomon, Ken R. Duffy, Kishori M. Konwar, and Muriel Medard
- Subjects
FOS: Computer and information sciences ,Computer science ,Maximum likelihood ,Computer Science - Information Theory ,Information Theory (cs.IT) ,E.4 ,94A05 ,020206 networking & telecommunications ,02 engineering and technology ,Control channel ,0202 electrical engineering, electronic engineering, information engineering ,Polar ,Algorithm ,5G ,Decoding methods ,Computer Science::Information Theory - Abstract
© 2020 IEEE. CA-Polar codes have been selected for all control channel communications in 5G NR, but accurate, computationally feasible decoders are still subject to development. Here we report the performance of a recently proposed class of optimally precise Maximum Likelihood (ML) decoders, GRAND, that can be used with any block-code. As published theoretical results indicate that GRAND is computationally efficient for short-length, high-rate codes and 5G CA-Polar codes are in that class, here we consider GRAND's utility for decoding them. Simulation results indicate that decoding of 5G CA-Polar codes by GRAND, and a simple soft detection variant, is a practical possibility.
- Published
- 2020
36. A large-scale validation of NOCIt’s a posteriori probability of the number of contributors and its integration into forensic interpretation pipelines
- Author
-
Lauren E. Alfonse, Slim Karkar, Catherine M. Grgicak, Ken R. Duffy, Desmond S. Lun, and Xia Yearwood-Garcia
- Subjects
0301 basic medicine ,Forensic Genetics ,Likelihood Functions ,Models, Statistical ,Nuisance variable ,Computer science ,Noise (signal processing) ,SIGNAL (programming language) ,Probabilistic logic ,Inference ,Statistical model ,DNA ,computer.software_genre ,DNA Fingerprinting ,Pathology and Forensic Medicine ,Electropherogram ,03 medical and health sciences ,030104 developmental biology ,0302 clinical medicine ,Probabilistic method ,Genetics ,Humans ,030216 legal & forensic medicine ,Data mining ,computer - Abstract
Forensic DNA signal is notoriously challenging to interpret and requires computational tools to support its interpretation. While data from high-copy, low-contributor samples result in electropherogram signal that is readily interpreted by probabilistic methods, electropherogram signal from forensic stains is often garnered from low-copy, high-contributor-number samples and is frequently obfuscated by allele sharing, allele drop-out, stutter and noise. Since forensic DNA profiles are too complicated to quantitatively assess by manual methods, continuous, probabilistic frameworks that draw inferences on the Number of Contributors (NOC) and compute the Likelihood Ratio (LR) given the prosecution’s and defense’s hypotheses have been developed. In the current paper, we validate a new version of the NOCIt inference platform that determines an A Posteriori Probability (APP) distribution of the number of contributors given an electropherogram. NOCIt is a continuous inference system that incorporates models of peak height (including degradation and differential degradation), forward and reverse stutter, noise and allelic drop-out while taking into account allele frequencies in a reference population. We established the algorithm’s performance by conducting tests on samples that were representative of types often encountered in practice. In total, we tested NOCIt’s performance on 815 degraded, UV-damaged, inhibited, differentially degraded, or uncompromised DNA mixture samples containing up to 5 contributors. We found that the model makes accurate, repeatable and reliable inferences about the NOC and significantly outperformed methods that rely on signal filtering.
By leveraging recent theoretical results of Slooten and Caliebe (FSI:G, 2018) that, under suitable assumptions, establish the NOC can be treated as a nuisance variable, we demonstrated that when NOCIt’s APP is used in conjunction with a downstream likelihood ratio (LR) inference system that employs the same probabilistic model, a full evaluation across multiple contributor numbers is rendered. This work, therefore, illustrates the power of modern probabilistic systems to report holistic and interpretable weights-of-evidence to the trier-of-fact without assigning a specified number of contributors or filtering signal.
- Published
- 2020
37. A large-scale dataset of single and mixed-source short tandem repeat profiles to inform human identification strategies: PROVEDIt
- Author
-
Lauren E. Alfonse, Ken R. Duffy, Desmond S. Lun, Catherine M. Grgicak, and Amanda D. Garrett
- Subjects
Forensic Genetics ,0301 basic medicine ,Genotype ,STR multiplex system ,Datasets as Topic ,Computational biology ,Biology ,Polymerase Chain Reaction ,Pathology and Forensic Medicine ,03 medical and health sciences ,Forensic dna ,0302 clinical medicine ,Genetics ,Humans ,Multiplex ,030216 legal & forensic medicine ,Alleles ,social sciences ,Human identity ,DNA Fingerprinting ,Identification (information) ,030104 developmental biology ,Microsatellite ,Scale (map) ,DNA Damage ,Microsatellite Repeats - Abstract
DNA-based human identity testing is conducted by comparison of PCR-amplified polymorphic Short Tandem Repeat (STR) motifs from a known source with the STR profiles obtained from uncertain sources. Samples such as those found at crime scenes often result in signal that is a composite of incomplete STR profiles from an unknown number of unknown contributors, making interpretation an arduous task. To facilitate progress on STR interpretation challenges, we provide over 25,000 multiplex STR profiles produced from one to five known individuals at target levels ranging from one to 160 copies of DNA. The data, generated under 144 laboratory conditions, are classified by total copy number and contributor proportions. For the 70% of samples that were synthetically compromised, we report the level of DNA damage using quantitative and end-point PCR. In addition, we characterize the complexity of the signal by exploring the number of detected alleles in each profile.
- Published
- 2018
38. Exploring STR signal in the single- and multicopy number regimes: Deductions from an in silico model of the entire DNA laboratory process
- Author
-
Neil Gurram, Kelsey C. Peters, Catherine M. Grgicak, Genevieve Wellner, and Ken R. Duffy
- Subjects
0301 basic medicine ,DNA Copy Number Variations ,Serial dilution ,Stochastic modelling ,In silico ,Clinical Biochemistry ,Biology ,Polymerase Chain Reaction ,Sensitivity and Specificity ,Biochemistry ,Analytical Chemistry ,03 medical and health sciences ,Forensic dna ,0302 clinical medicine ,Humans ,Computer Simulation ,030216 legal & forensic medicine ,Sample dilution ,Alleles ,Genetics ,Electrophoresis, Capillary ,DNA ,DNA Fingerprinting ,Electropherogram ,030104 developmental biology ,STR analysis ,Microsatellite ,Biological system ,Microsatellite Repeats - Abstract
Short tandem repeat (STR) profiling from DNA samples has long been the bedrock of human identification. The laboratory process is composed of multiple procedures that include quantification, sample dilution, PCR, electrophoresis, and fragment analysis. The end product is a short tandem repeat electropherogram composed of signal from alleles, artifacts, and instrument noise. In order to optimize or alter laboratory protocols, a large number of validation samples must be created at significant expense. As a tool to support that process and to enable the exploration of complex scenarios without costly sample creation, a mechanistic stochastic model that incorporates each of the aforementioned processing features is described herein. The model allows rapid in silico simulation of electropherograms from multicontributor samples and enables detailed investigations of involved scenarios. An implementation of the model that is parameterized by extensive laboratory data is publicly available. To illustrate its utility, the model was employed to evaluate the effects of sample dilutions, injection time, and cycle number on peak height, and the nature of stutter ratios at low template. We verify the model's findings by comparison with experimentally generated data.
- Published
- 2017
39. AB0144 STRATIFICATION OF PATIENTS WITH PRIMARY SJÖGREN’S SYNDROME BY MEASURING B-CELL HEALTH
- Author
-
A. Farchione, Vanessa L. Bryant, Philip D. Hodgkin, Franciscus Kroese, Gwenny M Verstappen, S. Downie-Doyle, Hendrika Bootsma, H. Cheon, Maureen Rischmueller, Ken R. Duffy, and Jessica C Tempany
- Subjects
0301 basic medicine ,Oncology ,medicine.medical_specialty ,Immunology ,Naive B cell ,B-cell receptor ,Disease ,Immunoglobulin D ,General Biochemistry, Genetics and Molecular Biology ,03 medical and health sciences ,0302 clinical medicine ,Rheumatology ,Internal medicine ,medicine ,Immunology and Allergy ,B cell ,030203 arthritis & rheumatology ,CD40 ,biology ,business.industry ,030104 developmental biology ,medicine.anatomical_structure ,Immunoglobulin class switching ,biology.protein ,Rituximab ,business ,medicine.drug - Abstract
Background:Primary Sjögren’s syndrome (pSS) is a heterogeneous immune disorder with broad clinical phenotypes that can arise from a large number of genetic, hormonal, and environmental causes. B-cell hyperactivity is considered to be a pathogenic hallmark of pSS. However, whether B-cell hyperactivity in pSS patients is a result of polygenic, B cell-intrinsic factors, extrinsic factors, or both, is unclear. Despite controversies about the efficacy of rituximab, new B-cell targeting therapies are under investigation with promising early results. However, for such therapies to be successful, the etiology of B-cell hyperactivity in pSS needs to be clarified at the individual patient level.Objectives:To measure naïve B-cell function in pSS patients and healthy donors using quantitative immunology.Methods:We have developed standardised, quantitative functional assays of B-cell responses that measure division, death, differentiation and isotype switching, to reveal the innate programming of B cells in response to T-independent and dependent stimuli. This novel pipeline to measure B-cell health was developed to reveal the sum total of polygenic defects and underlying B-cell dysfunction at an individual level. For the current study, 25 pSS patients, fulfilling 2016 ACR-EULAR criteria, and 15 age-and gender-matched healthy donors were recruited. Standardized quantitative assays were used to directly measure B cell division, death and differentiation in response to T cell-independent (anti-Ig + CpG) and T-cell dependent (CD40L + IL-21) stimuli. Naïve B cells (IgD+CD27-) were sorted from peripheral blood mononuclear cells and were labeled with Cell Trace Violet at day 0 to track cell division until day 6. B cell differentiation was measured at day 5.Results:Application of our standardized assays, and accompanying parametric models, allowed us to study B cell-intrinsic defects in pSS patients to a range of stimuli. 
Strikingly, we demonstrated a hyperresponse of naïve B cells to combined B cell receptor (BCR) and Toll-like receptor (TLR)-9 stimulation in pSS patients. This hyperresponse was revealed by an increased mean division number (MDN) at day 5 in pSS patients compared with healthy donors (p=0.021). A higher MDN in pSS patients was observed at the cohort level and was likely attributed to an increased division burst (division destiny) time. The MDN upon BCR/TLR-9 stimulation correlated with serum IgG levels (rs=0.52; p=0.011). No difference in MDN of naïve B cells after T cell-dependent stimulation was observed between pSS patients and healthy donors. B cell differentiation capacity (e.g., plasmablast formation and isotype switching) after T cell-dependent stimulation was also assessed. At the cohort level, no difference in differentiation capacity between groups was observed, although some pSS patients showed higher plasmablast frequencies than healthy donors.Conclusion:Here, we demonstrate defects in B-cell responses both at the cohort level, as well as individual signatures of defective responses. Personalized profiles of B cell health in pSS patients reveal a group of hyperresponsive patients, specifically to combined BCR/TLR stimulation. These patients may benefit most from B-cell targeted therapies. Future studies will address whether profiles of B cell health might serve additional roles, such as prediction of disease trajectories, and thus accelerate early intervention and access to precision therapies.Disclosure of Interests:Gwenny M. Verstappen: None declared, Jessica Catherine Tempany: None declared, HoChan Cheon: None declared, Anthony Farchione: None declared, Sarah Downie-Doyle: None declared, Maureen Rischmueller Consultant of: Abbvie, Bristol-Meyer-Squibb, Celgene, Glaxo Smith Kline, Hospira, Janssen Cilag, MSD, Novartis, Pfizer, Roche, Sanofi, UCB, Ken R. Duffy: None declared, Frans G.M. 
Kroese Grant/research support from: Unrestricted grant from Bristol-Myers Squibb, Consultant of: Consultant for Bristol-Myers Squibb, Speakers bureau: Speaker for Bristol-Myers Squibb, Roche and Janssen-Cilag, Hendrika Bootsma Grant/research support from: Unrestricted grants from Bristol-Myers Squibb and Roche, Consultant of: Consultant for Bristol-Myers Squibb, Roche, Novartis, Medimmune, Union Chimique Belge, Speakers bureau: Speaker for Bristol-Myers Squibb and Novartis., Philip D. Hodgkin Grant/research support from: Medimmune, Vanessa L. Bryant Grant/research support from: CSL
- Published
- 2020
40. Guessing random additive noise decoding with soft detection symbol reliability information - SGRAND
- Author
-
Muriel Medard and Ken R. Duffy
- Subjects
Channel code ,Property (programming) ,Computer science ,Reliability (computer networking) ,020206 networking & telecommunications ,Context (language use) ,02 engineering and technology ,Extension (predicate logic) ,Symbol (chemistry) ,Noise ,0202 electrical engineering, electronic engineering, information engineering ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Communication channel - Abstract
We recently introduced a noise-centric algorithm, Guessing Random Additive Noise Decoding (GRAND), that identifies a Maximum Likelihood (ML) decoding for arbitrary code-books. GRAND has the unusual property that its complexity decreases as code-book rate increases. Here we provide an extension to GRAND, soft-GRAND (SGRAND), that incorporates soft detection symbol reliability information and identifies an ML decoding in that context. In particular, we assume symbols received from the channel are declared to be error free or to have been potentially subject to additive noise. SGRAND inherits desirable properties of GRAND, including being capacity-achieving when used with random code-books, and having a complexity that reduces as the code-rate increases.
- Published
- 2019
41. Simultaneous tracking of division and differentiation from individual hematopoietic stem and progenitor cells reveals within-family homogeneity despite population heterogeneity
- Author
-
Tamar Tak, Ken R. Duffy, Noémie Paillon, Leïla Perié, Giulio Prevedello, and Gaël Simon
- Subjects
0303 health sciences ,education.field_of_study ,Cell type ,Offspring ,Population ,Biology ,Phenotype ,03 medical and health sciences ,Haematopoiesis ,0302 clinical medicine ,Evolutionary biology ,030220 oncology & carcinogenesis ,Identification (biology) ,Progenitor cell ,education ,030304 developmental biology ,Ancestor - Abstract
The advent of high throughput single cell methods such as scRNA-seq has uncovered substantial heterogeneity in the pool of hematopoietic stem and progenitor cells (HSPCs). A significant issue is how to reconcile those findings with the standard model of hematopoietic development, and a fundamental question is how much instruction is inherited by offspring from their ancestors. To address this, we further developed a high-throughput method that enables the simultaneous determination of common ancestor, generation, and differentiation status of a large collection of single cells. Data from it revealed that while there is substantial population-level heterogeneity, cells that derived from a common ancestor were highly concordant in their division progression and shared similar differentiation outcomes, revealing significant familial effects on both division and differentiation. Although each family diversifies to some extent, the overall collection of cell types observed in a population is largely composed of homogeneous families from heterogeneous ancestors. Heterogeneity between families could be explained, in part, by differences in ancestral expression of cell-surface markers that are used for phenotypic HSPC identification: CD48, SCA-1, c-kit and Flt3. These data call for a revision of the fundamental model of haematopoiesis from a single tree to an ensemble of trees from distinct ancestors, where the common-ancestor effect must be considered. As HSPCs are cultured in the clinic before bone marrow transplantation, our results suggest that the broad range of engraftment and proliferation capacities of HSPCs could be consequences of the heterogeneity in their engrafted families, and altered culture conditions might reduce heterogeneity between families, possibly improving transplantation outcomes.
- Published
- 2019
42. Guessing random additive noise decoding with symbol reliability information (SRGRAND)
- Author
-
Wei An, Muriel Medard, and Ken R. Duffy
- Subjects
FOS: Computer and information sciences ,Computer science ,Information Theory (cs.IT) ,Reliability (computer networking) ,Quantization (signal processing) ,Computer Science - Information Theory ,E.4 ,Data_CODINGANDINFORMATIONTHEORY ,Symbol (chemistry) ,Majority logic decoding ,McEliece cryptosystem ,Code (cryptography) ,Noise (video) ,Electrical and Electronic Engineering ,Algorithm ,Decoding methods ,Computer Science::Information Theory - Abstract
The design and implementation of error correcting codes has long been informed by two fundamental results: Shannon's 1948 capacity theorem, which established that long codes use noisy channels most efficiently; and Berlekamp, McEliece, and Van Tilborg's 1978 theorem on the NP-hardness of decoding linear codes. These results shifted focus away from creating code-independent decoders, but recent low-latency communication applications necessitate relatively short codes, providing motivation to reconsider the development of universal decoders. We introduce a scheme for employing binarized symbol soft information within Guessing Random Additive Noise Decoding, a universal hard detection decoder. We incorporate codebook-independent quantization of soft information to indicate demodulated symbols to be reliable or unreliable. We introduce two decoding algorithms: one identifies a conditional Maximum Likelihood (ML) decoding; the other either reports a conditional ML decoding or an error. For random codebooks, we present error exponents and asymptotic complexity, and show benefits over hard detection. As empirical illustrations, we compare performance with majority logic decoding of Reed-Muller codes, with Berlekamp-Massey decoding of Bose-Chaudhuri-Hocquenghem codes, with CA-SCL decoding of CA-Polar codes, and establish the performance of Random Linear Codes, which require a universal decoder and offer a broader palette of code sizes and rates than traditional codes., This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
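The binarized-reliability idea can be illustrated with a toy: noise guesses are enumerated only over the positions the demodulator flagged as unreliable, returning a conditional ML decoding if one exists within the mask, and reporting an error otherwise. The five-bit repetition codebook and the mask below are assumptions for illustration, not taken from the paper.

```python
import itertools

# Assumed toy code-book (5-bit repetition code); it simply makes the
# masked-guessing mechanics concrete.
codebook = {(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)}

def srgrand(received, unreliable):
    """GRAND restricted to masked positions: guess noise only on the bits
    the demodulator flagged as unreliable (binarized soft information)."""
    for w in range(len(unreliable) + 1):
        for flips in itertools.combinations(unreliable, w):
            guess = list(received)
            for i in flips:
                guess[i] ^= 1
            if tuple(guess) in codebook:
                return tuple(guess)  # conditional ML decoding
    return None  # no codeword reachable within the mask: report an error

# One unreliable bit carried the noise; masked guessing corrects it.
decoded = srgrand([1, 0, 1, 1, 1], unreliable=[1])
```

Restricting guesses to the mask is what gives the scheme its complexity advantage over hard-detection GRAND, at the cost of a possible error report when the true noise touched a bit declared reliable.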
- Published
- 2019
43. Capacity-Achieving guessing random additive noise decoding
- Author
-
Jiange Li, Ken R. Duffy, and Muriel Medard
- Subjects
FOS: Computer and information sciences ,Scheme (programming language) ,Computer science ,Computer Science - Information Theory ,Computation ,E.4 ,Code word ,Markov process ,02 engineering and technology ,Library and Information Sciences ,symbols.namesake ,0202 electrical engineering, electronic engineering, information engineering ,computer.programming_language ,Computer Science::Information Theory ,Channel code ,Information Theory (cs.IT) ,Codebook ,Approximation algorithm ,020206 networking & telecommunications ,Computer Science Applications ,Noise ,94A24 ,symbols ,computer ,Algorithm ,Decoding methods ,Information Systems ,Communication channel - Abstract
We introduce a new algorithm for realizing maximum likelihood (ML) decoding for arbitrary codebooks in discrete channels with or without memory, in which the receiver rank-orders noise sequences from most likely to least likely. Subtracting noise from the received signal in that order, the first instance that results in a member of the codebook is the ML decoding. We name this algorithm GRAND for Guessing Random Additive Noise Decoding. We establish that GRAND is capacity-Achieving when used with random codebooks. For rates below capacity, we identify error exponents, and for rates beyond capacity, we identify success exponents. We determine the scheme's complexity in terms of the number of computations that the receiver performs. For rates beyond capacity, this reveals thresholds for the number of guesses by which, if a member of the codebook is identified, that it is likely to be the transmitted code word. We introduce an approximate ML decoding scheme where the receiver abandons the search after a fixed number of queries, an approach we dub GRANDAB, for GRAND with ABandonment. While not an ML decoder, we establish that the algorithm GRANDAB is also capacity-Achieving for an appropriate choice of abandonment threshold, and characterize its complexity, error, and success exponents. Worked examples are presented for Markovian noise that indicate these decoding schemes substantially outperform the brute force decoding approach., National Science Foundation (U.S.) (Grant 6932716)
- Published
- 2019
44. Privacy with Estimation Guarantees
- Author
-
Mayank Varia, Ken R. Duffy, Lisa Vo, Flavio P. Calmon, Muriel Medard, and Hao Wang
- Subjects
FOS: Computer and information sciences ,Mathematical optimization ,Information privacy ,Computer Science - Machine Learning ,Computer science ,Information Theory (cs.IT) ,Computer Science - Information Theory ,020206 networking & telecommunications ,02 engineering and technology ,Mutual information ,Function (mathematics) ,Library and Information Sciences ,Machine Learning (cs.LG) ,Computer Science Applications ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,Computer Science::Databases ,Information Systems ,Computer Science::Cryptography and Security - Abstract
© 1963-2012 IEEE. We study the central problem in data privacy: how to share data with an analyst while providing both privacy and utility guarantees to the user that owns the data. In this setting, we present an estimation-theoretic analysis of the privacy-utility trade-off (PUT). Here, an analyst is allowed to reconstruct (in a mean-squared error sense) certain functions of the data (utility), while other private functions should not be reconstructed with distortion below a certain threshold (privacy). We demonstrate how chi-square information captures the fundamental PUT in this case and provide bounds for the best PUT. We propose a convex program to compute privacy-assuring mappings when the functions to be disclosed and hidden are known a priori and the data distribution is known. We derive lower bounds on the minimum mean-squared error of estimating a target function from the disclosed data and evaluate the robustness of our approach when an empirical distribution is used to compute the privacy-assuring mappings instead of the true data distribution. We illustrate the proposed approach through two numerical experiments.
- Published
- 2019
45. Malaria-induced remodelling of the bone marrow microenvironment mediates loss of haematopoietic stem cell function
- Author
-
Kira Eilers, Alexander Lipien, Robert E. Sinden, Berthold Göttgens, Nicola Ruivo, Heather Ang, Andrew M. Blagborough, Ken R. Duffy, Cristina Lo Celso, Myriam L. R. Haltalli, Nicola K. Wilson, Samuel Watcham, and Maria L. Vainieri
- Subjects
0303 health sciences ,Cell ,Biology ,3. Good health ,Transplantation ,03 medical and health sciences ,Haematopoiesis ,0302 clinical medicine ,medicine.anatomical_structure ,Interferon ,medicine ,Cancer research ,Bone marrow ,Stem cell ,Progenitor cell ,Intravital microscopy ,030304 developmental biology ,030215 immunology ,medicine.drug - Abstract
Severe infections are a major source of stress on haematopoiesis, where consequences for haematopoietic stem cells (HSCs) have only recently started to emerge. HSC function critically depends on the integrity of complex bone marrow niches, which have been shown to be altered during ageing and haematopoietic malignancies. Whether the bone marrow (BM) microenvironment plays a role in mediating the effects of infection on HSCs remains an open question. Here we used an murine model of malaria coupled with intravital microscopy, single cell RNA-Seq, mathematical modelling and transplantation assays to obtain a quantitative understanding of the proliferation dynamics of haematopoietic stem and progenitor cells (HSPCs) duringPlasmodiuminfection. We uncovered that duringPlasmodiuminfection the HSC compartment turns over significantly faster than in steady state, and that a global interferon response and loss of functional HSCs are linked to alterations in BM endothelium function and osteoblasts number. Interventions that targeted osteoblasts uncoupled HSC proliferation and function, thus opening up new avenues for therapeutic interventions that may improve the health of survivors of severe infections.
- Published
- 2018
46. Stochastically Timed Competition Between Division and Differentiation Fates Regulates the Transition From B Lymphoblast to Plasma Cell
- Author
-
Jie H. S. Zhou, Philip D. Hodgkin, Ken R. Duffy, and John F. Markham
- Subjects
lcsh:Immunologic diseases. Allergy ,0301 basic medicine ,Cell division ,Cellular differentiation ,Naive B cell ,Plasma Cells ,Immunology ,Priming (immunology) ,Mice, Transgenic ,Plasma cell ,Biology ,lineage priming ,Lymphocyte Activation ,03 medical and health sciences ,Mice ,Live cell imaging ,Biological Clocks ,Asymmetric cell division ,medicine ,Animals ,Immunology and Allergy ,Cell Lineage ,CD40 Antigens ,anti-CD40 stimulation titration ,Cells, Cultured ,B cells ,B-Lymphocytes ,Stochastic Processes ,fate regulation ,Precursor Cells, B-Lymphoid ,competing stochastic timers ,Germinal center ,Cell Differentiation ,Cell biology ,Mice, Inbred C57BL ,030104 developmental biology ,medicine.anatomical_structure ,Female ,Immunization ,Positive Regulatory Domain I-Binding Factor 1 ,lcsh:RC581-607 ,Cell Division - Abstract
In response to external stimuli, naïve B cells proliferate and take on a range of fates important for immunity. How their fate is determined is a topic of much recent research, with candidates including asymmetric cell division, lineage priming, stochastic assignment, and microenvironment instruction. Here we manipulate the generation of plasmablasts from B lymphocytes in vitro by varying CD40 stimulation strength to determine its influence on potential sources of fate control. Using long-term live cell imaging, we directly measure times to differentiate, divide, and die of hundreds of pairs of sibling cells. These data reveal that while the allocation of fates is significantly altered by signal strength, the proportion of siblings identified with asymmetric fates is unchanged. In contrast, we find that plasmablast generation is enhanced by slowing times to divide, which is consistent with a hypothesis of competing timed stochastic fate outcomes. We conclude that this mechanistically simple source of alternative fate regulation is important, and that useful quantitative models of signal integration can be developed based on its principles.
- Published
- 2018
- Full Text
- View/download PDF
47. Guessing noise, not code-words
- Author
-
Jiange Li, Ken R. Duffy, and Muriel Medard
- Subjects
Rank (linear algebra) ,Computer science ,Maximum likelihood ,Code word ,Markov process ,020206 networking & telecommunications ,02 engineering and technology ,symbols.namesake ,Noise ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Bit error rate ,Algorithm ,Decoding methods ,Block (data storage) ,Communication channel ,Computer Science::Information Theory - Abstract
© 2018 IEEE. We introduce a new algorithm for Maximum Likelihood (ML) decoding for channels with memory. The algorithm is based on the principle that the receiver rank orders noise sequences from most likely to least likely. Subtracting noise from the received signal in that order, the first instance that results in an element of the code-book is the ML decoding. In contrast to traditional approaches, this novel scheme has the desirable property that it becomes more efficient as the code-book rate increases. We establish that the algorithm is capacity achieving for randomly selected code-books. When the code-book rate is less than capacity, we identify asymptotic error exponents as the block length becomes large. When the code-book rate is beyond capacity, we identify asymptotic success exponents. We determine properties of the complexity of the scheme in terms of the number of computations the receiver must perform per block symbol. Worked examples are presented for binary memoryless and Markovian noise. These demonstrate that block-lengths that offer a good complexity-rate tradeoff are typically smaller than the reciprocal of the bit error rate.
- Published
- 2018
48. Four model variants within a continuous forensic DNA mixture interpretation framework: Effects on evidential inference and reporting
- Author
-
Ken R. Duffy, Muhammad O. Qureshi, Catherine M. Grgicak, Harish Swaminathan, and Desmond S. Lun
- Subjects
0301 basic medicine ,Forensic Genetics ,Heredity ,Computer science ,Normal Distribution ,lcsh:Medicine ,Inference ,Social Sciences ,Artificial Gene Amplification and Extension ,computer.software_genre ,Polymerase Chain Reaction ,0302 clinical medicine ,Medicine and Health Sciences ,Psychology ,lcsh:Science ,Reliability (statistics) ,Statistic ,Verbal Communication ,Likelihood Functions ,Multidisciplinary ,Applied Mathematics ,Simulation and Modeling ,Clinical Laboratory Sciences ,Genetic Mapping ,Categorization ,Physical Sciences ,Algorithms ,Research Article ,Genotyping ,Sample (statistics) ,Variant Genotypes ,Machine learning ,Research and Analysis Methods ,Normal distribution ,03 medical and health sciences ,Diagnostic Medicine ,Genetics ,Humans ,030216 legal & forensic medicine ,Molecular Biology Techniques ,Molecular Biology ,Forensics ,Behavior ,Models, Statistical ,business.industry ,Verbal Behavior ,lcsh:R ,Probabilistic logic ,Biology and Life Sciences ,Statistical model ,Probability Theory ,Probability Distribution ,DNA Fingerprinting ,United States ,030104 developmental biology ,Genetic Loci ,lcsh:Q ,Law and Legal Sciences ,Artificial intelligence ,business ,computer ,Software ,Mathematics - Abstract
Continuous mixture interpretation methods that employ probabilistic genotyping to compute the Likelihood Ratio (LR) utilize more information than threshold-based systems. The continuous interpretation schemes described in the literature, however, do not all use the same underlying probabilistic model and standards outlining which probabilistic models may or may not be implemented into casework do not exist; thus, it is the individual forensic laboratory or expert that decides which model and corresponding software program to implement. For countries, such as the United States, with an adversarial legal system, one can envision a scenario where two probabilistic models are used to present the weight of evidence, and two LRs are presented by two experts. Conversely, if no independent review of the evidence is requested, one expert using one model may present one LR as there is no standard or guideline requiring the uncertainty in the LR estimate be presented. The choice of model determines the underlying probability calculation, and changes to it can result in non-negligible differences in the reported LR or corresponding verbal categorization presented to the trier-of-fact. In this paper, we study the impact of model differences on the LR and on the corresponding verbal expression computed using four variants of a continuous mixture interpretation method. The four models were tested five times each on 101, 1-, 2- and 3-person experimental samples with known contributors. For each sample, LRs were computed using the known contributor as the person of interest. In all four models, intra-model variability increased with an increase in the number of contributors and with a decrease in the contributor's template mass. Inter-model variability in the associated verbal expression of the LR was observed in 32 of the 195 LRs used for comparison. Moreover, in 11 of these profiles there was a change from LR > 1 to LR < 1. 
These results indicate that modifications to existing continuous models do have the potential to significantly impact the final statistic, justifying the continuation of broad-based, large-scale, independent studies to quantify the limits of reliability and variability of existing forensically relevant systems.
- Published
- 2018
49. Multiplexed Division Tracking Dyes for Proliferation-Based Clonal Lineage Tracing
- Author
-
Susanne Heinzel, Julia M. Marchingo, Miles B Horton, Ken R. Duffy, Jie H. S. Zhou, Giulio Prevedello, and Philip D. Hodgkin
- Subjects
0301 basic medicine ,Cell type ,Lineage (genetic) ,Immunology ,Computational biology ,Biology ,CD8-Positive T-Lymphocytes ,Multiplexing ,03 medical and health sciences ,Mice ,0302 clinical medicine ,Fate mapping ,Lineage tracing ,Immunology and Allergy ,Animals ,Cell Lineage ,Coloring Agents ,Selection (genetic algorithm) ,Cell Proliferation ,Division (mathematics) ,Phenotype ,Mice, Inbred C57BL ,030104 developmental biology ,Cell Tracking ,Biomarkers ,Cell Division ,030215 immunology - Abstract
The generation of cellular heterogeneity is an essential feature of immune responses. Understanding the heritability and asymmetry of phenotypic changes throughout this process requires determination of clonal-level contributions to fate selection. Evaluating intraclonal and interclonal heterogeneity and the influence of distinct fate determinants in large numbers of cell lineages, however, is usually laborious, requiring familial tracing and fate mapping. In this study, we introduce a novel, accessible, high-throughput method for measuring familial fate changes with accompanying statistical tools for testing hypotheses. The method combines multiplexing of division tracking dyes with detection of phenotypic markers to reveal clonal lineage properties. We illustrate the method by studying in vitro–activated mouse CD8+ T cell cultures, reporting division and phenotypic changes at the level of families. This approach has broad utility as it is flexible and adaptable to many cell types and to modifications of in vitro, and potentially in vivo, fate monitoring systems.
- Published
- 2018
50. Guesswork Subject to a Total Entropy Budget
- Author
-
Ali Makhdoumi, Arman Rezaee, Muriel Medard, Ken R. Duffy, and Ahmad Beirami
- Subjects
FOS: Computer and information sciences ,Password ,TheoryofComputation_MISCELLANEOUS ,Computer Science - Cryptography and Security ,Theoretical computer science ,Uniform distribution (continuous) ,Computer science ,Information Theory (cs.IT) ,Computer Science - Information Theory ,String (computer science) ,020206 networking & telecommunications ,02 engineering and technology ,Adversary ,Identity (music) ,Subject (grammar) ,0202 electrical engineering, electronic engineering, information engineering ,Cryptography and Security (cs.CR) ,Word (computer architecture) ,Abstraction (linguistics) ,Computer Science::Cryptography and Security - Abstract
We consider an abstraction of computational security in password protected systems where a user draws a secret string of given length with i.i.d. characters from a finite alphabet, and an adversary would like to identify the secret string by querying, or guessing, the identity of the string. The concept of a "total entropy budget" on the chosen word by the user is natural, otherwise the chosen password would have arbitrary length and complexity. One intuitively expects that a password chosen from the uniform distribution is more secure. This is not the case, however, if we are considering only the average guesswork of the adversary when the user is subject to a total entropy budget. The optimality of the uniform distribution for the user's secret string holds when we have also a budget on the guessing adversary. We suppose that the user is subject to a "total entropy budget" for choosing the secret string, whereas the computational capability of the adversary is determined by his "total guesswork budget." We study the regime where the adversary's chances are exponentially small in guessing the secret string chosen subject to a total entropy budget. We introduce a certain notion of uniformity and show that a more uniform source will provide better protection against the adversary in terms of his chances of success in guessing the secret string. In contrast, the average number of queries that it takes the adversary to identify the secret string is smaller for the more uniform secret string subject to the same total entropy budget., In Proc. of Allerton 2017 (19 pages, 4 figures)
- Published
- 2018
Catalog
Discovery Service for Jio Institute Digital Library
For full access to our library's resources, please sign in.