Contrary to what the librarian says when you are part of a loud conversation, SSSH! time here refers to the Self-Selecting Safe Harbor (SSSH) tool invented by Zach Stevenson and crew in the Patrick Phillips Lab at the University of Oregon.
There is a back story here. We are proud of our people at InVivo Biosystems (IVB). Some, like me, have been hanging around with IVB for quite a long time. Others, like Zach, come and go, but still leave their mark.
Zachary joined us when we were pre-merger Knudra Transgenics. He was fairly new to genome engineering, but Zach was a quick study. He became a master of CRISPR-based transgenesis, which he leveraged in his next career move – helping him get into graduate school at the University of Oregon. Zach and the team at Knudra had tasked themselves with finding better tools for detecting genome integration. We needed efficient systems that identify only the animals that have experienced genomic integration. Better yet, the tool would be most effective if only the desired genome-integrated strains could survive exposure to a toxic compound. During Zach’s time at Knudra, the idea floated around a bit, but it never got the experimental legs to demonstrate its feasibility.
Once in graduate school, Zach teamed up with Megan J. Moerdyk-Schauwecker and Brennen Jamison in the Phillips Lab to get the real-world evidence that the idea can work. Their team chose the hygR gene to determine if a split-hygR gene could be harnessed as a tool to identify integration at a safe harbor locus (Stevenson et al., G3, 2020).
The principle is simple – chop the hygR gene into two parts. Put the long part into the genome of C. elegans and put the other part in your transgene plasmid. Zach did this at the MosSCI ttTi5605 safe harbor locus. This transgenic target strain contains most of the hygR gene but is missing a critical segment needed for creation of a functional hygromycin B phosphotransferase. Next, the transgene of interest is made in a plasmid that also contains the missing hygR part. The trick is to have the same sgRNA site in the plasmid and in the edited safe harbor site. The interaction of the plasmid and the genome, when injected with CRISPR reagents, creates a region of overlap of about 700 bp on each end of the insertion cargo that allows homology repair to do its magic. When designed right, you only need one sgRNA to initiate the DNA cuts that trigger efficient homologous recombination repair. This technique works great in C. elegans transgenesis. Add hygromycin B to the growth plates and only the genomic-integrated animals can survive. Whether it can work in embryo injections with other organisms remains to be determined.
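For the curious, the design logic can be sketched in a few lines of Python. The sequences below are toy stand-ins (not the real ttTi5605 or hygR sequences); the point is the two checks any split-marker design must pass: a shared sgRNA site in both molecules, and homology arms long enough (~700 bp) for HDR.

```python
# Minimal sketch of the split-selection-marker design check.
# Hypothetical 21 nt protospacer shared by plasmid and genomic locus.
SGRNA_SITE = "GATTACAGATTACAGATTACA"

def shares_cut_site(genome_locus, plasmid, site=SGRNA_SITE):
    """Both molecules must carry the same sgRNA site so one guide cuts both."""
    return site in genome_locus and site in plasmid

def arm_lengths(plasmid, cargo):
    """Lengths of plasmid sequence flanking the cargo (the homology arms)."""
    start = plasmid.index(cargo)
    return start, len(plasmid) - (start + len(cargo))

# Toy example: 700 bp arms around a 60 bp cargo, sgRNA site outside the arms.
cargo = "ATG" + "GCT" * 19
plasmid = SGRNA_SITE + "A" * 700 + cargo + "T" * 700
locus = "C" * 100 + SGRNA_SITE + "G" * 100

print(shares_cut_site(locus, plasmid))  # one guide can linearize both
left, right = arm_lengths(plasmid, cargo)
print(left >= 700 and right >= 700)     # arms meet the ~700 bp target
```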
At IVB we are building on this, using our fast and easy CRISPR-sdm technique to place the small split-hygR fragment at any locus of the genome. This will allow us to drive large constructs of 5 to 10 kb (and perhaps even 20 to 100 kb) into any native locus.
Bottom line: Getting some SSSH! time with this split-hygR technique can calm the frustration of the aggravated C. elegans researcher.
How do you model splice variations in your animal model systems?
Splice variation is an important consideration in genomic analysis of patient variants, and it is often overlooked (PMID: 29680930). It is estimated that 15%–60% of human disease mutations are due to splicing defects (PMID: 29304370). So, with perhaps 40% of disease-causing variation attributable to splicing defects, it becomes an important class of variation to model in functional studies when determining whether a variant is pathogenic.
But let’s first look at the process of splicing and what is known.
This complex process is managed in an equally complex way. Certain cell types favor one form of splicing, while other tissues select other forms. This natural variation in splice isoforms gives us more than just the number of genes in the genome to control biological output. In fact, it is part of the explanation for why a C. elegans nematode or a zebrafish, with roughly the same number of genes as a human, have such different levels of output complexity. Currently, the number of functional isoforms in humans may be an order of magnitude greater than what occurs in the nematode. Furthermore, the ways splice variation can take place get bewildering quickly.
How is splicing observed in the patient?
Layer on top of this the aberrant splice variations that can cause disease, and we have a tough interpretation problem. Thankfully, RNAseq is providing a huge amount of diagnostic discovery for splice variation. We can compare the splicing patterns of healthy populations with those of a patient suspected of a genetic disease and visualize where the splicing is going wrong (PMID: 28424332).
Modeling Splice Variation in Animal Models
To reduce the complexity of biology and yet bring more comparative biology relevance, we can often take a human cDNA sequence and use it to rescue the function of the animal’s version of the gene. To do this, we use CRISPR to remove the animal’s version of the gene (a gene “knock-out”). Next we take a human cDNA sequence optimized for expression in the animal and either replace the deleted locus or express the sequence in trans (at a safe harbor site, using either the promoter endogenous to the removed gene or a promoter well established for appropriate tissue expression). In C. elegans, we have been pleasantly surprised that, for orthologs of at least 30% identity, we get significant rescue of the loss of function seen in the knock-out more than half the time. In zebrafish, we have started applying the same gene replacement techniques. The result is a set of gene-humanized animals where the conservation of biology means we are looking at highly similar functional outputs.
Missense variations are conceptually easy to model. An amino acid change that is pathogenic (e.g., R235Q in STXBP1) is installed with CRISPR using a simple donor homology that instructs the cell’s HDR machinery to alter the DNA coding for R (arginine) into a codon for Q (glutamine) in our “wildtype” humanized locus.
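As a concrete (hypothetical) sketch of what that donor encodes: here the arginine codon at position 235 of a toy coding sequence is swapped for a glutamine codon – a single-base CGG-to-CAG change in this example.

```python
# Just the two codons this example touches (standard genetic code).
CODON = {"CGG": "R", "CAG": "Q"}

def install_missense(cds, aa_pos, new_codon):
    """Replace the codon at 1-based amino acid position aa_pos."""
    i = (aa_pos - 1) * 3
    return cds[:i] + new_codon + cds[i + 3:]

# Toy CDS: 234 alanine codons, then the arginine codon at position 235.
wildtype = "GCT" * 234 + "CGG"
edited = install_missense(wildtype, 235, "CAG")

print(CODON[wildtype[-3:]], "->", CODON[edited[-3:]])  # R -> Q
```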
But how do we mimic a splice variation?
It is actually quite simple. We create a donor homology that makes any splice form of interest. We are not interested in the mechanism to answer “if” it occurs – RNAseq already answers that. We are after functional consequence. We want to answer: “does a particular splice form have a measurable defect compared to normal splicing?”
Let’s look at one of the patient examples in detail.
In the red we have 4 patients with a collagen gene splice defect suspected of involvement in their diagnosis for Ullrich Congenital Muscular Dystrophy. Since all persons have two copies of the COL6A1 gene, we can see that one copy is splicing normally while the other copy is defective and its splicing brings in a pseudoexon. “The resulting inclusion of 24 amino acids occurs within the N-terminal triple-helical collagenous G-X-Y repeat region of the COL6A1 gene, the disruption of which has been well established to cause dominant-negative pathogenicity in a variety of collagen disorders” (PMID: 28424332)
Creating Knock-in for Animal Model of Disease
In regard to disease modeling of splice variations, we use a cDNA rescue approach. The variation seen in the patient is made as a plasmid coding for expression of a modified cDNA. This cDNA contains the human gene code suspected of creating an aberrant splice variation. Using CRISPR techniques, the segment coding for the human sequence is inserted into the genome, typically at the orthologous locus of the animal.
Modeling in the C. elegans nematode.
To model COL6A1, we would first seek to understand the phenotype from loss of function of the animal’s ortholog of the human gene. For COL6A1, this is the C16E9.1 gene in C. elegans. This gene is not well studied in the nematode, but it does show high expression in the alternative life stage of dauer.
The first step is to make a gene knock-out to remove the C16E9.1 gene from the worm genome. Next, a series of functional assays are run to determine if a functional defect can be detected for the C16E9.1 knock-out as a loss-of-function allele. For essential genes, the ultimate manifestation of loss of function is lethality as a homozygote. Other critical genes will often manifest a functional defect only after a battery of functional screens is performed. Once a defect in activity is observed, human cDNA can be introduced to see if rescue of function can be obtained. When rescue is obtained with human cDNA, we know we are looking at conserved biology for gene function between the animal and humans.
Once we have rescue of function, the fun begins. We can use CRISPR to put in the exact content that RNAseq indicates is occurring in the human gene. The pseudoexon seen in one copy of the patient’s chromosome pair can be made in the animal. Often the patient variant is problematic from a loss-of-function perspective, where haploinsufficiency drives disease. When such a defect is made homozygous in the animal, the effect is usually a severe phenotype (often lethal), similar to what is seen in the gene knockout. Yet in the specific case above, with the pseudoexon in COL6A1, we are dealing with a dominant-negative effect: the defective splice not only disrupts this protein, it also causes the good copy to fail to function properly. Animals homozygous for the pseudoexon defect may actually have a weaker phenotype than animals made as heterozygotes. Creation of the patient’s heterozygous condition is achieved by crossing the splice-variant-containing humanized animal model into the wildtype humanized animal model and examining the cross progeny for defects in activity.
Modeling in the Zebrafish.
We can do similar modeling in zebrafish using the Tol2 system. In zebrafish there is one ortholog of COL6A1. The col6a1 zebrafish gene has 55% sequence identity and 70% sequence similarity to the human gene. As with the work in the C. elegans nematode, we can remove the native gene and look for functional consequences. CRISPR techniques are used to create a knockout by inserting a stop codon early in the gene. If designed right, this results in loss of all expression of col6a1. Next we can measure the functional consequence of the gene knock-out, first by seeing whether the animal can be made homozygous. If it is not lethal, the animal can be screened by a battery of assays to determine if a functional defect exists. Finding either lethality as a homozygote, or observing a functional defect, allows testing the capacity of human COL6A1 cDNA to rescue function. A gene insertion approach using Tol2 is used to bring in the cDNA with an appropriate tissue-specific promoter. Rescue of function in specific tissues, for instance with the use of the 195 bp unc45b promoter for skeletal muscle expression (PMID: 27295336), will help elucidate the important roles of COL6A1 in dystrophy diseases.
The pseudoexon insertion defect seen in COL6A1 is a dominant-negative variation. So, when a single copy of this gene is brought into the animal, it has the capacity to suppress the activity of the unmodified copy of the gene. By inserting the cDNA carrying the patient variant into a safe harbor site, we create a pseudo-heterozygote: the cDNA is dosed from two chromosomal positions, while the wildtype locus provides expression of two copies of the normal gene. If the cDNA is dominant negative in its effect on the zebrafish gene, then a defect in gene function will manifest.
Recap of Splice defect Modeling in Animal Models
In summary, the ability to model splice variants comes down to the cDNA level. A modified cDNA rescue construct containing the human gene of interest is designed in three forms:
Positive Control (blue): The humanized wildtype cDNA provides a reference of the normal gene seen in healthy individuals.
Negative Control (red): A knockout deletion of the animal’s gene provides reference for full loss of function of the gene.
Test (yellow): A variant is tested for its functional activity. A range of activities is expected, depending on the pathogenic variant’s mechanistic role in disease pathology. It may be a dominant negative that creates a pathology worse than the loss-of-function allele because it binds to and causes bad behavior from the remaining good copy of the gene. Alternatively, the variant may cause loss of function; this will either be recessive and manifest in the homozygote, or dominant and manifest by haploinsufficiency in the heterozygote. Finally, the variant of interest may cause a gain of function, which typically manifests even in the heterozygote.
You know how it goes: when a hunch seems to get reinforced over and over again, your mind starts treating it as fact.
!Danger! Will Robinson… it’s time for a serious fact check.
My hunch was that the amino acid arginine (aka “Arg” or “R”) seems to show frequent association with pathogenicity. It started with the observation that many of the established pathogenic variants in the coding sequence of STXBP1 seem to involve arginine. Extracting from ClinVar the missense variants that are pathogenic and likely pathogenic gives the following table:
Indeed, arginine (R) is disproportionately represented. Assuming all amino acids are equal, there should be 4.3 variants for each amino acid. The disproportionately low entries make sense. Methionine (M) has only one codon (ATG) instructing insertion of this amino acid into a sequence. Similarly, tryptophan (W) has only one codon (TGG). These two amino acids should be represented below the average. A little oddly, we see similarly low levels for lysine (K), phenylalanine (F) and glutamine (Q), which each have two codons. If codon dosage were key to variant proportioning, these should have been seen at least 2x more often than M and W, so perhaps something more than codon dosage mediates amino acid choice in creating pathogenic variations.
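As a sanity check on the codon-dosage argument, here is a small Python sketch of the two null models: equal counts per amino acid versus counts scaled by codon number. The total of 86 variants is an assumption chosen to reproduce the 4.3-per-amino-acid average quoted above.

```python
# Codon counts per amino acid in the standard genetic code (61 sense codons).
codon_counts = {
    "R": 6, "L": 6, "S": 6, "A": 4, "G": 4, "P": 4, "T": 4, "V": 4,
    "I": 3, "C": 2, "D": 2, "E": 2, "F": 2, "H": 2, "K": 2, "N": 2,
    "Q": 2, "Y": 2, "M": 1, "W": 1,
}

def expected_counts(n_variants):
    """Expected variants per amino acid under two null models."""
    total = sum(codon_counts.values())  # 61
    equal = n_variants / len(codon_counts)
    by_codon = {aa: n_variants * c / total for aa, c in codon_counts.items()}
    return {"equal": equal, "by_codon": by_codon}

exp = expected_counts(86)  # hypothetical table total
print(round(exp["equal"], 1))          # 4.3 per amino acid if all equal
print(round(exp["by_codon"]["R"], 1))  # six codons raise arginine's bar
print(round(exp["by_codon"]["M"], 1))  # single-codon amino acids sit lowest
```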
Arginine has 6 codons, which could still drive its outsized proportion in the graph. Yet serine (S) and leucine (L) also have 6 codons, but they sit at only 7 and 3 pathogenic variants, respectively. Mighty arginine alone accounts for 13 of the 43 pathogenic variants in STXBP1 (30%). Tempering my enthusiasm is the observation that three amino acid positions, R292, R406 and R451, each have multiple changes called pathogenic. No other amino acid in the STXBP1 pathogenics shows this capacity for repeated change. So why is arginine at such high proportion among the assigned pathogenics? Perhaps it is just a consequence of investigator focus specific to STXBP1, with gazes fixed on the recurring de novo clinical variants at positions 292, 406 and 451.
Is arginine involved in fragility elsewhere in the genome?
To normalize for possible investigator bias, and to find a method that can be applied to other portions of the genome, I took advantage of the Ensembl database to list and rank a gene’s coding sequence variants by bioinformatics analysis. Ranking on CADD was used to order protein coding sequence variations by their severity.
Ensembl allows us to identify which variations are theoretically likely to disrupt protein function. The choice to rank by CADD (Combined Annotation-Dependent Depletion) gives us a sophisticated algorithm that avoids investigator bias, because it intentionally avoids using “known” pathogenicity databases when creating its rankings. A key test is to see if CADD can independently detect the pathogenicity known to exist in STXBP1. To construct the test, we compare the top-scoring CADD variants with the lowest-scoring CADD variants.
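The Top-30 / Bottom-30 comparison itself is simple to sketch. Here is a minimal Python version, with made-up variant records standing in for a real Ensembl/CADD export:

```python
def top_bottom(variants, k=30):
    """Rank (variant_id, cadd_score) pairs and return the k extremes."""
    ranked = sorted(variants, key=lambda v: v[1], reverse=True)
    return ranked[:k], ranked[-k:]

# Fake records: 100 variants with scores 0..99 standing in for CADD values.
demo = [(f"V{i}", float(i)) for i in range(100)]
top, bottom = top_bottom(demo, k=3)

print([v for v, _ in top])     # highest-scoring variants
print([v for v, _ in bottom])  # lowest-scoring variants
```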
With CADD, we get an independent call for possible pathogenicity that still picks up what you might expect. Nearly half the calls in the Top-30 CADD pull up known pathogenicity and no benign calls are found. In the Bottom-30 CADD we get one known benign call and no pathogenics.
Healthy population data are also consistent. STXBP1 is autosomal dominant: only one of your two chromosomal copies needs to be defective for disease to occur. Selection pressure has been very tight on autosomal dominant genes. Variants in healthy populations cannot occur at higher than the known frequency of the disease in the population. The published frequency of STXBP1-caused early-infantile epileptic encephalopathy is 1/90,000. The largest healthy population database is gnomAD. At 141,456 individuals, and with STXBP1 pathogenicity distributed across at least 43 pathogenic alleles, the likelihood of even one pathogenic variant appearing in healthy populations is pretty close to zero. Some of our Top-30 CADD variants are seen at 1x or more frequency in healthy populations. Most of these are unassigned. For the unassigned variants seen at 1x or more, the disease-frequency argument strongly implies they are benign.
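The back-of-envelope version of that frequency argument takes a few lines (frequencies and counts from the text; the even spread across 43 alleles is a simplifying assumption):

```python
disease_freq = 1 / 90_000   # early-infantile epileptic encephalopathy
n_alleles = 43              # known pathogenic alleles sharing that frequency
n_people = 141_456          # gnomAD individuals

per_allele_freq = disease_freq / n_alleles     # if spread evenly
expected_carriers = per_allele_freq * n_people # expected sightings in gnomAD

print(round(expected_carriers, 3))  # well below one expected carrier
```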
So CADD is not perfect; the top-scoring hits are a mix of known pathogenic and probably benign. But the bottom-scoring CADD variants seem more efficient at pulling out benign calls. In the Bottom-30 CADD, only one variant, I271V, is labeled Likely Benign by ClinVar, yet nearly every one of these alleles (27 of 30) is seen in healthy populations, so they too are probably benign.
At this point in the analysis, we can pinpoint an anomaly. Y264C is labeled in ClinVar as Likely Pathogenic. But by the population frequency argument, this assignment is highly unlikely: Y264C has been observed in healthy populations. So at a bare minimum, it should be downgraded to a VUS, and probably called Likely Benign for causing early-infantile epileptic encephalopathy.
Finding Arginine-associated Fragility Throughout the Genome
This Top-30 / Bottom-30 approach was applied to a large set of genes. As a form of internal control, we added isoleucine (I) to the screen. With less conviction, I have felt this amino acid associates with benign variants. If true, it should show enrichment in the Bottom-30 CADD scores. So in my gene-set experiment, I measured 4 bins: two for how many arginines and isoleucines appear in the Top 30, and two for how many appear in the Bottom 30.
30% of top 30 CADD scoring variants contain arginine???!!!
An assumption of even distribution of amino acids, combined with an even more absurd assumption of an average 3.05 codons per amino acid, gives us 4.3% as the average amino acid fraction per each Top/Bottom 30 (dashed line). Arginine is 7.2x more than this average. Yet we need to account for the fact that arginine uses about 2x more than the average number of codons. As a result, the arginine bias in the Top 30 is about 3.5x more than expected. For isoleucine, the enrichment in the Bottom 30 appears to be about 2x more than expected.
Test dataset – 30% arginine in Top-30 CADD prevails
The noisiest data in the Top-30 CADD appear to be the arginine data. A cumulative trending plot was used to see how many genes were needed before the trend to 30% becomes apparent. After assessing 7 genes, the trend starts to stabilize. A new set of 7 genes was then chosen, this time from the Undiagnosed Diseases Network (UDN). The UDN recently listed 54 genes in desperate need of animal modeling to provide gene function studies. A sub-selection of these was identified as having good sequence similarity to genes in the animal models we hold dear to our heart and expertise (zebrafish and C. elegans). The Top-30 / Bottom-30 CADD selection was applied to these genes and plotted for Arg and Ile enrichment. The 30% prevails for arginine – it occurs at least 3.5x more than expected among the top CADD variants hypersensitive to substitution.
The most notable anomaly is arginine. Six codons are used by arginine, but its observed frequency is low at 4.2%. To illustrate how low, they calculated the expected frequency for each amino acid, biasing only for the GC richness of vertebrate genomes.
The expected frequency for arginine is quite high, at about 10.5%, due to the GC richness of its codons. Yet the actual observed frequency is quite low, at about 4%. Based on this observed frequency, we bounce back: we now assess that we are observing arginine in the Top 30 at 8x more than expected. There is no explanation for the anomaly, and it just became more pronounced!
Taking a different approach, we can ask what percentage of ALL known pathogenic and likely pathogenic variants in a gene involve arginine substitution. With 7 genes analyzed, we get the same 30% for arginine. Yet the calculation says it should be below 4%. The 8x-more-than-expected result prevails.
Are your arginines special too?
This analysis has uncovered a unique phenomenon: it appears everyone’s arginines are special. Exactly why arginine has this special status is not entirely clear. It is highly likely arginine has been strongly selected against random incorporation during evolution. As a result of this strong negative selection (much stronger than for any other amino acid), arginine’s frequency in proteins is much lower than predicted. The observed pathogenic sensitivity may be a readout of this evolutionary hyperselectivity. Basically, arginine’s use in any given protein is very particular. A possible driver is arginine’s amazing capacity to bring high order to neighboring side chains in most protein structures. When it is gone, chaos reigns. When it is introduced where it should not be, chaos still reigns.
How prevalent are Variants of Uncertain Significance?
The ClinVar database for variant interpretation was analyzed for its levels of ACMG-AMP assessments. With help from the data dumps at ClinVar Miner, the yearly distribution of assessments was plotted. Since 2016, shortly after the ACMG-AMP guidelines came out in 2015, the number of assessments assigned to the VUS category has grown rapidly. These are the variants that clinical genetics researchers have examined but cannot decide whether they are pathogenic or not.
How big will the VUS problem get?
To estimate how large the VUS problem will become, we must first understand how big the human genome is. Controversy abounds, but current estimates are 21,306 protein-coding genes and 21,856 non-coding genes. To be conservative, and for simplicity’s sake, let us use 20,000 genes. The next question is how many of these are disease associated. When we look to ClinVar for the number of “genes with variants specific to one protein-coding gene,” we get 7,221 genes. More conservatively, we can look to ClinVar’s “gene_condition_source_id,” which lists 4,242 genes as associated with a diagnostic condition. This lower number is reinforced by OMIM, in which the “Total number of genes with phenotype-causing mutation” is 4,162 genes. These lists have been growing rather steadily at 5% per year, so in a few years the number of gene-disease associations will probably approach 5,000 genes, or roughly 1/4 of the human genome.
VUS problem may eventually approach 7 Million variants
A recent attempt has been made to preload the human genome with pathogenicity assessments. The InterVar database applied ACMG-AMP guidelines to ~80,000,000 amino acid positions in the genome to provide a database for easier variant interpretation. Since at least 20% of these positions are likely to be in genes with known disease association, there are roughly 16,000,000 variants that will eventually occur in patient-derived genome sequencing. If the current trend of 44% VUS translates across that number, then there will be close to 7,000,000 variants in need of functional studies to resolve their pathogenicity.
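The arithmetic behind the ~7 million figure is simply:

```python
positions = 80_000_000    # amino acid positions scored by InterVar
disease_fraction = 0.20   # share falling in genes with known disease links
vus_rate = 0.44           # current VUS share of ClinVar assessments

patient_relevant = positions * disease_fraction  # positions likely to matter
projected_vus = patient_relevant * vus_rate      # projected VUS burden

print(int(patient_relevant), int(projected_vus))  # 16,000,000 and ~7,000,000
```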
A novel animal model system for rapid variant interpretation
The team at Nemametrix just produced a wonderful set of preliminary data that we showed at the recent American Society of Human Genetics meeting. It shows it is possible to use a training set of known benign and pathogenic alleles in a gene to “teach” a machine learning algorithm to determine if pathogenicity is present in a VUS. When applied to the STXBP1 gene, a set of 5 benign and 5 pathogenic alleles was sufficient to train for segregation in an LDA plot, and Y75C was assessed as pathogenic.
Once this type of system is trained with a set of known pathogenic and benign variants, the assessment of pathogenicity can be achieved in as soon as 10 days from the start of a VUS transgenesis project.
In a prior blog post, the presence of dominant alleles in my genome gave me pause when trying to interpret the data from sequencing my DNA. Dominant alleles can cause disease when a pathogenic variation occurs in only one gene copy of the chromosome pair. Contrast this with a recessive allele, where you must get a defect in both chromosome copies of the gene to cause disease. In the recessive condition, if you have only one defective copy, you can expect to remain healthy, but you are a carrier of a disease allele. With no immediate consequence to carrier status, many more individuals should be walking around with variations that are recessive towards disease. In fact, the CFTR gene variation (p.Arg117His) for Cystic Fibrosis that was highlighted in my Veritas genomic sequencing report is quite common. It occurs globally at 1 per 2,500 persons, and increases to close to 1 per 1,000 for northern Europeans, which is a dominant portion of my ancestral genomic composition. In contrast, the CACNA1S variant (p.Arg419His) that most concerns me in my genome has a prevalence of 1 in 25,000. That’s low enough to be a Rare Disease in Europe, but still probably way too high for disease manifestation rates.
Rare domination in CACNA1S needs to be rare enough to cause Hypokalemic Periodic Paralysis.
Dominant disease causality for the Arg419His variation in CACNA1S is unlikely because it occurs more frequently than the 1 per 100,000 population frequency of Hypokalemic Periodic Paralysis. Yet there are two variations known to be causative in CACNA1S: Arg528His and Arg1239His. Arg528His occurs at close to 1/100,000, while Arg1239His has yet to be detected in healthy populations. Clearly Arg1239His has a population frequency low enough to be causative for Hypokalemic Periodic Paralysis. Yet for my Arg419His, the frequency is too high for it to be causative. An Autosomal Dominant (AD) variant effect is extremely unlikely for my lone Arg419His allele.
If dominant alleles need to be rare in the population, how frequent is dominant status for variants of a disease?
The frequency of Autosomal Dominance (AD) among disease genes appears to be quite high. It is estimated that there are about 7,000 Rare Diseases. If we assume the Online Mendelian Inheritance in Man (OMIM) database already represents most of these genes, then rare disease variants will map to the 4,346 gene entries in OMIM with published allelic variations. Next, I listed these variations in blocks of 100 to reveal the number of genes known to be exclusively Autosomal Dominant (AD), exclusively Autosomal Recessive (AR), or some kind of hybrid.
When one runs down the inheritance patterns and tabulates them per gene, the first 100 variants have about twice as many genes in the AR category as in the AD category.
Running through another 400 variants in 100-variant blocks shows the trend continues – dominance of a genetic condition occurs for about 1/3rd of the disease genome.
Axiom for the individual : “I am not very dominating, but there are lots out there who are.”
So at the individual level, it appears that AD status for pathogenic or likely pathogenic variants in your genome is very rare. Yet at the population level, a large proportion of Rare Disease is caused by Autosomal Dominant variation. Rare disease calculates to occur at about 1 per 15 persons. So for about 1 in 50 people (150 million persons worldwide), their disease-causing variation is likely to be Autosomal Dominant.
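The arithmetic behind “about 1 in 50” works out as follows (a world population of ~7.5 billion is my assumption for the 150 million figure):

```python
rare_disease_rate = 1 / 15  # people living with a rare disease
ad_fraction = 1 / 3         # share of disease genes that are AD
world_pop = 7.5e9           # assumed world population

ad_rate = rare_disease_rate * ad_fraction  # = 1/45, rounded in text to 1/50
print(round(1 / ad_rate))                  # about 1 in 45 people
print(round(world_pop / 50 / 1e6))         # ~150 (million), at the rounded rate
```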
In today’s genomic medicine era, it remains challenging to understand the functional consequence of a gene variant’s contribution to disease. Guilt by association is one of the criteria by which a new variant is judged. We can look at healthy population data and compare it to established Pathogenic and Likely Pathogenic variants. This helps us understand if a new variant may have a propensity to cause disease. The thought is that if a new variant occurs in a region previously established as causing pathogenicity, then the new variant may be pathogenic too (ACMG guideline: PM1 “moderate” assessment criteria).
Is my variant guilty of pathogenicity because of its proximity to a pathogenicity hotspot?
In the image above, we see that there are hotspots (red) and coldspots (blue) for pathogenicity in STXBP1. The hotspot values were generated from the known Pathogenic and Likely Pathogenic variants listed in ClinVar. The coldspot values (highMAF) come from variants seen in healthy populations. In yellow we have Variants of Uncertain Significance (VUS). Intensity of the peak is a measure of both how many different variations are seen at an amino acid position and whether their nearest neighbors have the same assignment. This plot suggests there are spots in STXBP1 that can tolerate sequence diversity (blue bars) and spots where a hit leads to pathogenic behavior (red bars). Further, the VUS are landing in both red-bar and blue-bar regions. Perhaps we can consider VUS to be either pathogenic or benign by this association? Yet there is a critical assumption that leads to a question: how legitimate is it to ASSUME that every variant in healthy populations (“highMAF”) is benign?
2,504 healthy population genomes – Calculating the rare variants in each person
To dig into the validity (or invalidity) of this assumption, we can look to a large population study and ask how often we see variation and of what type. The 1000 Genomes Project Consortium shows an average person has about 4,500,000 variants. Of these, about 100,000 are somewhat rare because they are seen in fewer than 1 in 200 persons (<0.005 MAF). The even rarer “singletons” of the study occur at a frequency of 1 per 2,504 persons. This restriction gives us about 10,000 very rare variants to think about per person. Yet to get rarer still, and ask how many variants per person meet the 1 per 200,000 USA definition of Rare Disease frequency, the study size would need to be 100x bigger. Nevertheless, the 1000 Genomes study reports interesting data on healthy-population variants that are also seen as pathogenic in the Human Gene Mutation Database (HGMD) and ClinVar datasets. Filtering the observed pathogenic variants in the healthy population as a frequency per individual, every person can expect to harbor 20-25 variants of established pathogenicity.
A larger study by Karczewski et al. 2019 is approaching the scale needed for assessing Rare Disease. A dataset of 141,456 human genomes (125,748 exomes and 15,708 whole genomes) was harvested from the wildtype controls used in various disease studies. The exomes observe variation mostly in the coding sequence of a gene, while the whole genomes record variant information across the gene (coding + upstream/downstream/introns). The result is a deeper measure of the frequency of missense variation that approaches the 1 in 200,000 genomes needed for Rare Disease designation. Currently the National Organization for Rare Disorders (NORD) lists 1,258 diseases in its database. STXBP1 cross-references to two of these (Dravet and West syndromes). Each syndrome has a support group, two of the 283 family foundation groups listed in the NORD member list.
Yet the Rare Disease situation is larger. The NIH’s Genetic and Rare Diseases Information Center (GARD) lists 6,264 unique genetic diseases. This suggests there are thousands of genes for which we can expect gene variant issues leading to disease. ClinVar currently lists 7,046 as the number of “Genes with variants specific to one protein-coding gene.” Basically, it appears that a third of your 20,000 protein-coding genes could take a hit that increases your risk or likelihood of coming down with genetic disease symptoms.
GARD lists an intriguing statistic: 20-25 million Americans are living with Rare Disease. The USA's current population is 327.2 million, so roughly 1 in 15 individuals is probably living with rare disease. Extrapolated worldwide, and assuming monogenic cause, at least 51 million pathogenic variants might be residing in the human population. Add polygenic burden and the number may be a multiple of that (100, 150, 200, 250 million….??) for variants associated with disease being experienced today. Guilt by association to hotspots and coldspots might provide some answers, but functional studies are the more definitive proof, and 50+ million is a lot of animal models to build!!
What genes are good candidates for alternative animal modeling?
I set out to determine which important disease genes are good candidates for creating animal models in C. elegans. The first step was to turn to a database with a comprehensive listing of human genes and their disease associations. The DisGeNET database has nearly every human gene annotated for its level of disease association (17,549 genes as of June 2019). It provides a curated list of 8,400 genes with a Gene-Disease Association (GDA) score of 0.1 or higher. The top 1,000 genes have GDA scores of 0.69 or higher, indicating a significant disease association. These top 1,000 were examined for ortholog status in C. elegans using the DIOPT database. 749 orthologies were detected, of which 411 were clearly reciprocal (back-blasting the ortholog returns the starting gene as the best hit). From these, the top 100 genes for high homology and detectable loss-of-function consequence were selected.
Tabulation of disease-associated genes with properties favorable for C. elegans humanization
The top 100 are tabulated below in gene-alphabetical order. These 100 genes have 8,360 variants known to be problematic (Pathogenic, Likely Pathogenic, or VUS).
Use a search tool to quickly find out if your favorite gene occurs below.
(Note: gene knock out for 58% of these genes results in lethality.)
When you get the genomic report, you have a moment of trepidation. What will it say? ….Will it reveal that you should take countermeasures immediately? ….Will it say something you can do nothing about? The latter occurred for me. There were findings that had a strong impact on my psyche.
Two things were called out heavily. First, a cancer risk for melanoma. Good thing my family (first my momma, and then my spouse) has been diligent in the liberal application of sunscreen to the family. Once I Googled and PubMed-searched the MC1R (R160W) locus, I found the evidence less than compelling for a dramatic change of lifestyle. Just keep the sunscreen coming and I will likely be fine.
The carrier result was a little more of a shocker. A good personal friend has a daughter homozygous for variants in this gene. It was discovered in utero and they have been vigilant ever since. Their daughter is now in her teens, doing exceptionally well and acting like any normal kid, currently enthralled with dance class and other outdoor activities. Preventative medicine done right. So getting tagged with a pathogenic variant in this gene gives me mixed feelings: some worry and yet, almost, pride. Even though my good friends don't share my specific genetic lesion, it still feels very personal and connecting. Furthermore, this is one of the genes where modern genomic medicine is making great progress in understanding and treatment.
Will you too be a carrier of a pathogenic variation?
Carrier status is something all of us should expect. Veritas recently disclosed publicly at the Precision Medicine World Congress that 90% of the customer reports in their database return carrier status for at least one pathogenic variant. Recent discussions with Robert Green at Harvard confirm this: he showed me a large dataset that put the number at 92% of healthy populations carrying known pathogenic variants. You might think there are a lucky few (10%) who are not carriers, but think again. The average person has close to 3 million differences from the reference genome, and this may be an underestimate. Distribute that uniformly across the genome and the coding regions hold close to 30 thousand variations. Since you have close to 20 thousand genes, that means every gene carries approximately 1.5 variations. This involves lots of approximating and does not factor in selection against bad variations, yet the main message of the quick calculation is that every gene is likely to have a variation and some genes will have multiple. So the original question of how many of these are pathogenic becomes difficult to approximate. Publications suggest we may each have up to about 1,300 suspect variations hiding in our genomes, yet the number of definitive variants with "known" pathogenicity is likely much lower.
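The quick calculation above can be sketched directly; the inputs are the text's rough approximations, not measured values:

```python
# Back-of-the-envelope estimate of per-gene variation, using the
# approximations from the paragraph above.
total_variants = 3_000_000   # typical differences from the reference genome
coding_fraction = 0.01       # roughly 1% of the genome is protein-coding
n_genes = 20_000             # approximate protein-coding gene count

coding_variants = total_variants * coding_fraction
per_gene = coding_variants / n_genes

print(f"coding variants: ~{coding_variants:,.0f}")   # ~30,000
print(f"variants per gene: ~{per_gene}")             # ~1.5
```

As noted, this distribution is treated as uniform and ignores selection against deleterious variants, so it is only an order-of-magnitude argument.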
Complicating this issue is variable penetrance: a pathogenic variant may behave monogenically in one family, while in another family that same variation may act more polygenically, needing other gene mutations to produce pathology in the patient. There it behaves more like a "risk factor" for disease.
Pathogenic variant frequency in Chris Hopkins’ genome
The vagueness of my carrier status "kills" me, so I wanted to know more. I contacted a good friend at Rady Children's Hospital. Dr. Matthew Bainbridge is a researcher who was a key contributor to Rady's renowned speed at using whole genome sequencing for rapid genetic diagnosis. Matthew introduced me to some software tools he has been developing. His company Codified Genomics has developed variant analysis software that allows exploration of one's genomic variants. All you need is your BAM or VCF files.
What's that? …You don't know what a BAM file is, …or a VCF?!!!
Don't worry, let's decode the jargon. In the clinphen journey to understand my clinical predilections, predispositions, and pathos, I found myself immersed in the intricacies of the end-to-end solution in genomic data acquisition and interpretation. What happens when you spit in a tube and put it in the mail? A lot of stuff! I came across an amazing guide to understanding the industry space behind genomic sequencing, the Enlightenbio Report. This helped me get a tightly-focused view of the process of understanding one's DNA.
That first box is what happens after you spit in the tube. The chemicals in the tube react with the cellular material in the spit to stabilize it and prevent its degradation. This allows one to send the sample at room temp to the lab. On the receiving end, the lab initiates a protocol to isolate the DNA that comes from the mouth epithelial cells that slough off into your spit. The DNA is manipulated so that it can go onto a microchip slide, and a set of DNA sequencing chemistry reactions reads out the DNA in small segments of sequence. Each of the millions of sequence reads is recorded in a fastq file. The fastq reads are compared and aligned to a reference genome to make a BAM file. The BAM alignments are processed to detect where sequence variation occurs, which is recorded as a VCF file. VCF files are analyzed by comparison to databases, and assessments are made of each variant's potential for pathogenicity. The assessment data is generally provided as a report to the clinician (or the intrepid genome wanderer such as myself). This report takes the raw data and massages it into a format for easier understanding of the baggage of one's genome.
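As a sketch, the fastq → BAM → VCF steps map onto a conventional open-source toolchain (bwa, samtools, and bcftools are common choices; the file names here are hypothetical, and clinical labs run their own validated pipelines). The commands are assembled as strings for illustration, not executed:

```python
# Hypothetical command sequence for the fastq -> BAM -> VCF steps described
# above. Sample and reference file names are made up for illustration.
sample = "my_genome"
ref = "GRCh38.fa"   # reference genome the reads are aligned to

pipeline = [
    # align millions of short reads to the reference, producing a sorted BAM
    f"bwa mem {ref} {sample}_R1.fastq {sample}_R2.fastq | samtools sort -o {sample}.bam",
    # index the BAM so downstream tools can random-access it
    f"samtools index {sample}.bam",
    # call positions where the reads disagree with the reference -> VCF
    f"bcftools mpileup -f {ref} {sample}.bam | bcftools call -mv -o {sample}.vcf",
]
for step, cmd in enumerate(pipeline, 1):
    print(step, cmd)
```

Each output file feeds the next step, which is why a consumer service can hand back a BAM or VCF as the portable artifacts of the whole process.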
1604 suspect variations in my genome
Matthew helped me upload my VCF files into the Codified program. Next, he showed me how to wander around sifting the data by aspects such as allele frequency, dominant/recessive status, known pathogenic genes, etc. The upload to Codified indicates I have exactly 1,604 suspect variations occurring at an appreciable fraction of the reads and at positions inside, or in close proximity to, the coding sequences of my genes. These variants are suspect because they may alter protein function or levels of expression for the identified genes. If we limit the dataset to changes that alter amino acid composition (non-synonymous), we get 875 gene variations. Add back potential splicing issues, indels, and aberrant start and stop codons, and we are back up to 1,440 variants that are highly suspect for altering gene expression and function.
316 MIM variant hits in my genome!
What happens if we limit the entire 1,604 to only those genes with recognized involvement in disease? We get 316 variants in genes recognized as disease-associated by the Mendelian Inheritance in Man (MIM) database. When we restrict this set to coding issues only, we get 281 suspect variants.
I get a clean bill of health at my physical exams, so can I disregard these 281 suspect variants?
One easy step is to filter for carrier-only status. 111 variants are clearly identifiable as autosomal recessive (AR) only. These would require a hit in each of the paired chromosome copies to be of concern. Since no paired hits were detected, we can dismiss these genes from immediate concern. That leaves hits in genes with known autosomal dominant (AD) issues, the genes where only one bad hit is needed to be pathogenic. Bottom line: 170 gene variants in my genome are worthy of further contemplation.
How frequent is frequent in my 170?
There is good rationale to be concerned about a hit in a gene with AD propensity only if the hit is rare in the population. The thinking is that if a variation is deleterious by itself (AD), it cannot be tolerated at a high level in the human population. Contrast this with recessive (AR) variants (also called "alleles" when talking about frequency). My known AR pathogenic variant in the CFTR gene occurs in the human population at a 0.0014 minor allele frequency (MAF). This relatively high allelic frequency is tolerated because you need a hit in each gene copy to have a syndromic issue. Autosomal dominant alleles must have much lower frequency. If we cull the 170 for variations that occur at 0.00001 MAF or lower, we get 53 codon-altering variations to be concerned about. Examining the list manually gave me 17 genes for which I hold varying degrees of concern, of which I list the top 10:
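The triage cascade of the last few sections (suspect variant → MIM disease gene → AD inheritance → rare allele) can be sketched as a series of list filters. The toy records below are hypothetical; the real counts in my genome came from the Codified software:

```python
# Sketch of the variant-triage cascade described above, applied to a toy
# variant list. Real triage runs over thousands of VCF records.
variants = [  # hypothetical records for illustration
    {"gene": "CFTR",    "in_mim": True,  "inheritance": "AR", "maf": 0.0014},
    {"gene": "CACNA1S", "in_mim": True,  "inheritance": "AD", "maf": 0.000005},
    {"gene": "GENE_X",  "in_mim": False, "inheritance": "AD", "maf": 0.02},
]

mim_hits = [v for v in variants if v["in_mim"]]               # disease genes only
dominant = [v for v in mim_hits if v["inheritance"] == "AD"]  # one hit suffices
rare_ad  = [v for v in dominant if v["maf"] <= 0.00001]       # rare enough to worry

print([v["gene"] for v in rare_ad])   # -> ['CACNA1S']
```

The AR record survives the MIM filter but is dismissed at the inheritance step, mirroring how the 111 carrier-only variants were set aside above.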
None are in the ACMG59
In a prior blog post, I described the list of genes that can be included in a clinical report as secondary findings. These 59 genes are allowed in a report because known actions can be taken to mitigate their negative health effects. None of my genes of concern are in this group, so immediate actionability is absent from my findings about the baggage in my genome. In fact, although I list these as genes of concern, they do not actually bother me that much. I am still alive and in good health; if pathogenic variations in these genes were to have negative health consequences, they should have manifested many years ago. Nevertheless, the three for which I hold the highest concern are CACNA1S, LGI1, and RTN2.
The variation in CACNA1S (p.R419H) may sound benign, and it is a conservative change in amino acid composition, but it occurs in a highly-conserved region. The position is an arginine ("R") in humans, mice, fish, flies and worms. This invariant use of R implies protein function will be compromised when the position is substituted with a histidine. The LGI1 (p.A253T) variant is also a conservative amino acid change, but it is in a less conserved region. This lack of complete conservation indicates the position might tolerate an alanine-to-threonine change. The RTN2 variant is complex: it makes two significantly alarming changes. It makes a dramatic leucine-to-arginine change in the 4th exon up from the end of the protein, and it occurs immediately adjacent to a splice junction acceptor site. This alteration of the splicing region suggests it could lead to improper splicing in a highly conserved region of the protein and thus create a defective protein.
It is likely that all three of these genes yield a protein of compromised function. But what is not clear is the type of compromise. Do they lead to loss-of-function (LOF) activity? Or do they lead to dominant gain-of-function (GOF)? These variations are most likely in the LOF category; otherwise, I would almost certainly be dealing with the disease symptoms that GOF variants manifest. Yet this is just supposition, a hypothesis. We don't yet have solid evidence for what is going on.
How could we get a final answer on whether these variations are pathogenic or not?
To get precision answers, we could model all of these variants in C. elegans. For CACNA1S and RTN2, their high conservation from human to worm would allow direct modeling at the homologous position of the worm's native gene ("Native Locus").
Our prior work with full gene humanization indicates more congruent results occur if we first swap a human gene in for the native gene locus ("Humanized Locus") and then install the variant. A humanized locus allows modeling of any variant, whether or not it is highly conserved across species. So far in our studies, all known pathogenic variants have exhibited deviant behavior, but only when put into humanized systems. Contrast this with insertion at the native locus: some known pathogenic alleles did not create detectable behavioral deviance!
All 3 genes I am concerned about are of a favorable size such that the human sequence can be easily optimized and installed for expression from the worm's native locus ("Humanized" animal). If we can observe that the human gene rescues loss of function, we will know we are off to the races and can study variant biology in a gene-humanized system. The humanized animals become precision proxies serving as clinical avatars of the patient condition.
CACNA1S is a druggable target. The creation of a humanized system expressing CACNA1S as a gene replacement of the egl-19 gene would generate a platform for drug discovery. The patient variants might be responsive to calcium channel blockers such as benzothiazepines, phenylalkylamines, and 1,4-dihydropyridines. The end result: a highly personalized medicine approach that finds drug treatments specific to the patient's genetic preconditions.
There is significant pressure to increase diagnostic yield, and it has its consequences. BRCA testing is probably the most developed ecosystem for genetic tests, but controversy remains about which medical procedures are best recommended for the patient. High-profile cases, like Angelina Jolie's decision to undergo a bilateral mastectomy and the implications of a "positive" Turner Syndrome test, have helped bring the controversies to more widespread attention.
The heart of the controversy is how often a correct diagnosis leads to a form of unnecessary care that crowds out necessary care, or worse. Physician and surgeon Atul Gawande wrote a New Yorker piece titled:
“Overkill – An avalanche of unnecessary medical care is harming patients physically and financially. What can we do about it?”
This article nicely explores the problem of unproductive or unnecessary procedures. In regards to genetic testing, we need to be mindful of all the downstream repercussions of a positive (or negative) test result.
Forms of Risk in Breast Cancer Testing
The decision to have a mastectomy is a challenging one. The involvement of BRCA1 and BRCA2 in breast cancer is clear, yet what to do about it is still controversial (Domchek 2018). A retrospective study conducted from 2006 to 2014 identified 780 women at 11 cancer centers who underwent BRCA testing after breast cancer was detected (Rosenberg 2016). 86% of those testing positive elected to have the bilateral mastectomy procedure. But perhaps even more striking, 51% of those who tested negative also went on to have bilateral mastectomy. A question arises:
Does the election to have full mastectomy by a large fraction of women testing either positive or negative for BRCA1 and BRCA2 pathogenic variants indicate this form of genetic testing has low value to treatment care?
In the general population, the risk of death from a surgical procedure is small but real, at about 0.01%. So it is prudent to keep that in mind before going under the knife. Are there more minimally invasive procedures available? A somewhat older study (Kurian 2014) suggests it has been known for a while that double mastectomy is no better than the less invasive breast-conserving surgery with radiation in its impact on patient mortality. The authors went on to say:
“In a time of increasing concern about overtreatment, the risk-benefit ratio of bilateral mastectomy warrants careful consideration and raises the larger question of how physicians and society should respond to a patient’s preference for a morbid, costly intervention of dubious effectiveness”
The Need for Peace of Mind
In light of the evidence, what psychological factors drive the choice to have bilateral mastectomy? For those testing positive as carriers of pathogenic BRCA mutations, the choice is backed by evidence that recurrence risk drops significantly; for noncarriers, it appears the impact of having a breast cancer diagnosis is a sufficient driver (Hamilton 2017). Within the physician-patient relationship, there is a need to better communicate how to avoid unnecessary procedures while still meeting the psycho-social needs of the patient.
Although ClinVar is a useful resource for seeing data distributions and trends, groups need to be cautious with the details. Julie Eggington from the Center for Genomic Interpretation states “I would warn that rates derived from what is being reported in classification databases are likely very different than what is really going on in testing labs and academic labs. People rarely report boring stuff – I think calculated pathogenic rates derived from classification databases are too high in almost every context.” Julie further postulates that the issue of false positives is larger than people realize. The implication is that about 30% of the variants in ClinVar designated as pathogenic may in fact not be pathogenic. Within a gene, some variants are being more over-interpreted than others. Groups may be relaying data that is fraught with the inaccuracy of a high false positive rate.
Also unsettling is that Variants of Uncertain Significance (the "VUS" problem) are frequently not reported to the physician at the time of genetic testing. Recent studies in hereditary cancer have found that 8.7% of VUS have been reclassified to Likely Pathogenic status, while only 0.7% of pathogenic variants have been changed to non-pathogenic status (Mersch 2018). This reclassification leaves us with 21% pathogenic, 21% benign, and 58% VUS in hereditary cancer, closely resembling the overall distribution outlined in an earlier blog post that relies on ClinVar data (34P:26B:40V). Keep in mind from the prior paragraph that the true level of pathogenic variants may be much lower than what is reported in the database sources. This leads to a follow-on problem: as variants get reclassified, there is frequently a big disconnect in getting that information back out to the patient.
Consumer reports suggestions:
What are some of the things we can do as consumers of genetic testing? A good Consumer Reports article makes 5 suggestions to consider when getting a genetic test and contemplating what the follow-up would be (surgery, drugs, or no-therapeutic-approach-is-known) if a pathogenic finding results:
1) Do I really need this test or procedure?
2) What are the risks and side effects?
3) Are there simpler, safer options?
4) What happens if I don’t do anything?
5) How much does it cost, and will my insurance pay for it?
Uncertainty in Uncertain Times
We are embarking down the new frontier of precision medicine. Our genomes will hold a big key to better understanding of our health and lifespan. But because the one-gene / one-disease hypothesis is the exception and not the rule, we have a long way to go in becoming predictive and actionable as we obtain more knowledge of the molecular pathogenicity of the variation in our genomes. The journey to link genotype to phenotype will be long and arduous, and possibly quite epic in its implications for the health management approach we take as a species.
Domchek SM. Risk-Reducing Mastectomy in BRCA1 and BRCA2 Mutation Carriers: A Complex Discussion. JAMA. 2018 Dec 6. doi: 10.1001/jama.2018.18942.
Rosenberg SM, Ruddy KJ, Tamimi RM, Gelber S, Schapira L, Come S, Borges VF, Larsen B, Garber JE, Partridge AH. BRCA1 and BRCA2 Mutation Testing in Young Women With Breast Cancer. JAMA Oncol. 2016 Jun 1;2(6):730-6. doi: 10.1001/jamaoncol.2015.5941.
Kurian AW, Lichtensztajn DY, Keegan TH, Nelson DO, Clarke CA, Gomez SL. Use of and mortality after bilateral mastectomy compared with other surgical treatments for breast cancer in California, 1998-2011. JAMA. 2014 Sep 3;312(9):902-14. doi: 10.1001/jama.2014.10707.
Hamilton JG, Genoff MC, Salerno M, Amoroso K, Boyar SR, Sheehan M, Fleischut MH, Siegel B, Arnold AG, Salo-Mullen EE, Hay JL, Offit K, Robson ME. Psychosocial factors associated with the uptake of contralateral prophylactic mastectomy among BRCA1/2 mutation noncarriers with newly diagnosed breast cancer. Breast Cancer Res Treat. 2017 Apr;162(2):297-306. doi: 10.1007/s10549-017-4123-x. Epub 2017 Feb 1.
Mersch J, Brown N, Pirzadeh-Miller S, Mundt E, Cox HC, Brown K, Aston M, Esterling L, Manley S, Ross T. Prevalence of Variant Reclassification Following Hereditary Cancer Genetic Testing. JAMA. 2018 Sep 25;320(12):1266-1274. doi: 10.1001/jama.2018.13152.
I got my report from Veritas for the MyGenome analysis. What is hiding between the words that come out of my mouth and get written down on this blog? Saliva was delivered into a tube 3 months ago, and finally the data is starting to arrive.
What lies beneath the surface may not stay beneath the surface.
If you are like me, you may think you are "healthy," but here is what is highly likely: you will be a carrier for a disease, and risk factors for other diseases will likely be identified in your genome. As addressed in a prior post, 9 of 10 persons are carriers for rare disease. You will even have a modest chance (~20%) of immediately actionable conditions that you can start to explore now to find mitigating options.
The Ticking Time Bomb
That last one is perhaps the most compelling reason to get your genome done: can you catch an impending time bomb of genetic disease before it goes off? For pathogenic variants in the ACMG59 "secondary findings" genes, you stand a good chance of being able to defuse the bomb before it is too late.
For my report, no immediately actionable findings were discovered. Still, I am highly skeptical that we can say I am healthy and "free" of genetic preconditions. Researchers are only now scratching the surface of this potential. The rare monogenic drivers of disease are somewhat understood, but our understanding of polygenic drivers is still in its infancy.
What lies beneath might be two variations that are not pathogenic by themselves, but together can cause, or greatly exacerbate, a disease.
Think about the size of the problem from a theoretical aspect. There are roughly 7,000 genes thought to be involved in rare disease. Some of the variants in these genes are monogenic and powerful enough by themselves to cause disease. But it is likely there are many more variants in these genes that are not pathogenic by themselves and need another variation somewhere else in the genome to enable manifestation of disease. Taking just the 7,000 genes, the digenic possibilities number 49 million. In fact, the remainder of the genome can be part of a digenic pair, so the space may actually be near 400 million. Then what about 3-gene combinations: 8 trillion!! That's 1000x more than the number of people on the planet! The only hope we have for predictive systems here is Big Data and AI to help us gain sufficient understanding.
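The scale of that search space is easy to check. The round figures in the text come from simple powers of the gene counts (7,000² = 49 million, 20,000² = 400 million, 20,000³ = 8 trillion); counting unordered gene combinations with C(n, k) instead gives figures of the same order of magnitude:

```python
import math

RARE_DISEASE_GENES = 7_000   # genes thought to be involved in rare disease
ALL_GENES = 20_000           # approximate protein-coding gene count

# The text's round figures use simple powers of the gene counts.
assert RARE_DISEASE_GENES ** 2 == 49_000_000       # digenic, rare-disease genes only
assert ALL_GENES ** 2 == 400_000_000               # digenic, whole genome
assert ALL_GENES ** 3 == 8_000_000_000_000         # trigenic: 8 trillion

# Unordered combinations (each gene pair/triple counted once) are the same
# order of magnitude:
print(math.comb(RARE_DISEASE_GENES, 2))   # 24,496,500 distinct gene pairs
print(math.comb(ALL_GENES, 3))            # ~1.3 trillion distinct gene triples
```

Either way of counting, the combinatorial space dwarfs any cohort we could ever assemble, which is the point of the paragraph above.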
Heterogeneity and Homogeneity – the Advantage and Bane of Each.
To truly move to greater understanding of our genetic liabilities, we must move from qualitative (yes or no?) assessment to quantitative (how much?) assessment. Knowing that a gene variant is 50% pathogenic in its potential can help us start to deconvolute the polygenic problem. When two 50%-pathogenic variants in the same disease pathway are seen in the same individual, we will have reached a threshold and the disease condition can manifest. With the amazing amount of heterogeneity in the human genome, analyzing patient-derived tissue will be an extremely difficult approach for quantifying the pathogenic potential of a variant. Instead, it becomes highly desirable to use systems of high homogeneity. A uniform genetic background greatly simplifies quantifying a variant's contribution to disease. Knowing the genetic background is the same, we can say that gene variant A is XX% stronger than gene variant B in pathogenic propensity after deploying a range of functional tests of deviant behavior for each variant.
Proxies of Disease Biology
C. elegans has unique attributes that make it an ideal system for quantifying variant behavior. There is enough similarity of gene function between humans and the worm that, so far, 4 of 4 human gene insertions with observable sequence homology have been capable of rescuing function as gene replacements of their worm orthologs. Among its many favorable features (speed to transgenics, microscopic size, amenability to high throughput, a wide range of easily measured phenotypes, etc.), the worm is a self-fertilizing hermaphrodite. This means that when growth conditions are good, the animal clones copies of itself and can go from 1 animal to nearly 30 million near-identical animals in just under 10 days. Only when conditions get stressful does spontaneous nondisjunction of the sex chromosomes become more prevalent, allowing males to form. Under these stress conditions, males go from being extremely rare to about 1 per 100 animals. So the worm has evolved to be highly tolerant of homogeneity, and only needs to sample heterogeneity a small fraction of the time to maintain the health of the species (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1462001).
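The expansion figure is simple exponential growth. Assuming roughly 300 self-progeny per hermaphrodite and about a three-day generation time (typical lab values, not numbers from the text), three generations gives:

```python
# Clonal expansion of a self-fertilizing hermaphrodite, assuming ~300
# progeny per animal and ~3 days per generation (typical lab values).
BROOD_SIZE = 300
GENERATION_DAYS = 3
N_GENERATIONS = 3

population = 1                     # start from a single animal
for _ in range(N_GENERATIONS):
    population *= BROOD_SIZE       # every animal self-fertilizes

print(population)                           # 27,000,000 animals
print(N_GENERATIONS * GENERATION_DAYS)      # in roughly 9 days
```

Three rounds of selfing from one animal lands at 27 million, matching the "nearly 30 million in just under 10 days" figure above.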
Classical LOH – the Bane of Self-fertilization
The clonal nature is quite useful for getting large populations of nearly identical animals, but there is a flip side that creates problems. There is a phenomenon in genetics called loss of heterozygosity (LOH). Commonly applied to explain the evolution of cancer cell populations, the principle in population genetics is that inbreeding drives heterozygous conditions towards rarity. What this means for a self-fertilizing hermaphrodite: if an individual that is heterozygous at a gene (A/B: variants A and B) starts to self-propagate, then half the progeny will be homozygous (either A/A or B/B) and the other half will be heterozygous (A/B). In the next generation, the homozygotes remain homozygous (either A/A or B/B), but the hets generate another 50/50 split of homozygotes and hets. After 10 generations the het is nearly nonexistent in the population (<1%). The population has bifurcated into A/A and B/B strains. If B/B is deleterious to life, then at 10 generations most of the animals are A/A.
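The decay of heterozygosity under selfing is just repeated halving, so the <1% figure after 10 generations follows directly; a minimal sketch:

```python
# Loss of heterozygosity under self-fertilization: each generation, half of
# a heterozygote's (A/B) progeny are homozygous (A/A or B/B), so the
# heterozygous fraction of the population halves every generation.
het_fraction = 1.0                 # start from a single A/B animal
for generation in range(10):
    het_fraction *= 0.5

print(f"{het_fraction:.4%}")       # ~0.098%, below 1% after 10 generations
```

After 10 generations the heterozygous fraction is 1/1024 of the population, with the remainder split between the A/A and B/B homozygous classes.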
DNA replication is not perfect. As a clonal population expands, random mutations create heterozygous conditions at random genes (A/B scenarios). For the researcher maintaining strains, one of the biggest mistakes is to serially propagate each generation by isolating only 1 individual for the next population expansion. Since each clonal progeny will carry at least 4 de novo mutations relative to its parent, after 10 generations of this extreme selectivity the population will have accumulated random, possibly pathogenic, hits in quite a few genes, and the serially-propagated strain will have drifted significantly from the genetics of the starting strain. Critical here for C. elegans is to occasionally access sexual reproduction to avoid Muller's Ratchet.
Genetic drift is Unavoidable
To mitigate this (though not eliminate it), good practice is to transfer 10 to 20 animals for each next generation of a population being maintained. Even with this technique, fecundity-compromised strains can quickly evolve new mutations that eliminate the starting phenotype and grow faster. So, added to a variety of other transgenerational silencing mechanisms, the clonal propagation of a strain can lead to auto-selection of suppressors that effectively "silence" an engineered gene phenotype. Thankfully worms can be flash frozen shortly after making a transgenic line, so one can have an essentially endless supply of starting material. Genetic drift driving selection of gene-silencing backgrounds can be avoided by going back to a fresh thaw. As a result, highly homogeneous backgrounds can be maintained for comparing the properties of two variants.
Anti-simpatico Creates More Complexity
Let's take the dialog back to quantifying the pathogenicity of variants in human disease genes. There are almost certainly some variants in the genome that act to suppress a "monogenic" pathogenic variant. We can envision a negative pathogenicity value for these variants. Adding more complexity is the fact that a variant can be pathogenic in one condition and protective in another. The classic example is sickle-cell anemia and malaria. A person who is a carrier for the recessive pathogenic variation is protected from malaria infection. Yet persons homozygous for the E6V change in hemoglobin have a pathogenic condition that leads to quality-of-life issues and a reduced lifespan (https://www.cdc.gov/malaria/about/biology/#tabs-1-4). So, as Julie Eggington says, pathogenicity assessment must be made in a disease-specific context. As a result, calculating all of any one individual's genetic liabilities is an exceedingly complex problem.