Wednesday, December 9, 2020

Variability with polygenic risk scores


    With the growing popularity of companies such as 23andMe and the ability to assess a person's DNA for a "polygenic risk score," preventative health decisions based on genetics are becoming more and more common. A polygenic risk score is essentially an overall assessment of an individual's disease risk: a small DNA sample from saliva is genotyped, and an estimate is generated based on large-scale genomic studies. While a large majority of consumers can be accurately assessed for risk, other users may find themselves placed in the wrong category. This led to a study in which coronary heart disease, atrial fibrillation, type 2 diabetes, Alzheimer's disease, glaucoma, and breast cancer were used to calculate risk scores. The data showed that regardless of which control is used for the risk factor, the scale of population-level genetics always introduces some variability. To reduce this randomness, the researchers had to run the tool multiple times and average the results, eliminating the random elements in a computational process that strives for accuracy.
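The run-it-multiple-times-and-average idea can be illustrated with a minimal sketch. This is not the study's actual tool: the `noisy_risk_score` function below is a hypothetical stand-in that adds Gaussian noise to a "true" score to mimic the random variability the researchers observed, and it simply shows how averaging repeated runs shrinks that spread.

```python
import random

def noisy_risk_score(true_score, noise_sd=0.5, rng=random):
    # Hypothetical stand-in for one run of a risk-score tool whose
    # population-level sampling introduces random variability.
    return true_score + rng.gauss(0, noise_sd)

def averaged_risk_score(true_score, runs=10, noise_sd=0.5, rng=random):
    # Averaging repeated runs, as the researchers did, shrinks the
    # random spread by roughly 1/sqrt(runs).
    total = sum(noisy_risk_score(true_score, noise_sd, rng) for _ in range(runs))
    return total / runs

def sd(xs):
    # Plain population standard deviation.
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

rng = random.Random(42)
single = [noisy_risk_score(1.0, rng=rng) for _ in range(1000)]
averaged = [averaged_risk_score(1.0, runs=10, rng=rng) for _ in range(1000)]

print(sd(single))    # spread of one-run scores
print(sd(averaged))  # noticeably smaller spread after 10-run averaging
```

With 10 runs per estimate, the averaged scores scatter about three times less than single-run scores, which is the statistical reason repeating the computation makes the final score more stable.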
    Based on the information in the article, these polygenic tests create something of a yes-and-no debate. Essentially, the more times the snippet of DNA is analyzed, the more accurate the results become. The question is how many times these companies actually repeat these analyses, and how far off the overall results can be without that repetition. Of course this research is evidently useful to those who are prone to these risks, but if the inaccuracies are substantial in certain groups, is the information really that useful on a global scale? It is all interesting regardless; I am just more curious about the information actually gathered by these tests versus what is generally assumed by the test itself based on these population-level pools.
