Rather than "identify" true elements, I just want to report how many of them are likely false positives
An answer: there is no (none, zero, zilch) statistical basis for predicting the FDR, based upon your process, regardless of the window length.
Further, I do not believe that there ever could be any correlation between the length of the window in which you randomise, and any meaningful statistic about real-world DNA.
Basis of conclusion: Instinct. My gut feel for the requirements for Monte Carlo simulations to produce statistically valid results; having constructed and run many hundreds of such simulations over the years.
Caveat: Not one of my simulations had anything to do with DNA or genomics; and I know next to nothing about the subject.
You cannot draw a statistically valid conclusion based upon 1 (or even a few) random trials, when shuffling just 50 bytes of your sequences can have up to 1,267,650,600,228,229,401,496,703,205,376 (4**50) possible outcomes.
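To put that number in perspective, here is a throwaway Perl snippet; it has nothing to do with your actual pipeline, and the 13/12/12/13 base composition is simply made up. It computes 4**50, the number of possible 50-base sequences, and the number of distinct shuffles of one particular 50-base window with that composition:

    #!/usr/bin/perl
    ## Scale of the space involved; purely illustrative, not part of the OP's process.
    use strict; use warnings;
    use Math::BigInt;

    ## All possible 50-base sequences over {A,C,G,T}: 4**50 (== 2**100).
    my $sequences = Math::BigInt->new( 4 )->bpow( 50 );
    print "4**50 = $sequences\n";    ## 1267650600228229401496703205376

    ## Distinct shuffles of a hypothetical window containing 13xA, 12xC, 12xG, 13xT:
    ## 50! / ( 13! * 12! * 12! * 13! )
    sub fact { my $f = Math::BigInt->new( 1 ); $f->bmul( $_ ) for 2 .. $_[ 0 ]; $f }

    my $shuffles = fact( 50 )->bdiv( fact( 13 ) * fact( 12 ) * fact( 12 ) * fact( 13 ) );
    print "distinct shuffles of that window = $shuffles\n";

A single randomisation samples exactly one point from a space that size; it tells you nothing about the distribution.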
However, if you want a mathematically sound statistical assessment of your question, then you are going to have to describe the process in much more detail, and give far more accurate assessments of the ranges of the numbers involved. See below for some of the questions arising.
Warning: what follows may come across as "angry". It isn't. It's expressed this way to make a point.
How do you expect to get assistance, when you ask: a statistics question; of a bunch of programmers; and conceal everything in genomics lingo?
What the &**&&^% are:
You say "I DO supply the software with 2 separate libraries of LCVs one for the headers, another for the trailer sequences that are supposed to be 'bona fide' based on independent verification".
That is an almost entirely useless description:
Conclusion: based upon the information you've supplied so far, and my own experience of drawing inferences from random simulations, I see no basis for any meaningful conclusion with regard to false discovery rates.
But:
If you were to describe that 3rd party process in detail: what are its inputs (a genome and 2 libraries; but how big, and what other constraints); and what are its outputs; then a meaningful assessment might become possible.

The fact that your graph appears to tail off as the length of the window increases is, of itself, meaningless. It also seems to initially increase. Both could simply be artifacts of the particular set of randomisations that occurred in this run.
How many runs would be required to draw a conclusion? There is no way to determine that from the information you have provided so far.
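By way of illustration only: the 50-base window, the stand-in statistic (counting overlapping 'TATA' hits) and the 1000-shuffle count below are all invented, not your pipeline. But this is roughly the shape of what you would need: many independent shuffles, a statistic computed for each, and a measure of the spread, whose standard error only shrinks as 1/sqrt( number of runs ):

    #!/usr/bin/perl
    ## Sketch only: why one shuffle tells you nothing. The window, the stand-in
    ## statistic and the run count are hypothetical placeholders.
    use strict; use warnings;
    use List::Util qw( shuffle sum );

    ## A made-up 50-base window.
    my $window = join '', map { ( qw( A C G T ) )[ rand 4 ] } 1 .. 50;

    ## Stand-in for whatever statistic the real pipeline reports per window.
    sub stat_of { my @hits = $_[ 0 ] =~ m[(?=TATA)]g; return scalar @hits }

    ## Many independent shuffles of the same window; not one.
    my @vals;
    for ( 1 .. 1000 ) {
        my $shuffled = join '', shuffle split '', $window;
        push @vals, stat_of( $shuffled );
    }

    ## Mean, sample variance and standard error of the shuffled (null) values.
    my $mean = sum( @vals ) / @vals;
    my $var  = sum( map { ( $_ - $mean ) ** 2 } @vals ) / ( @vals - 1 );
    my $sem  = sqrt( $var / @vals );    ## shrinks only as 1/sqrt( runs )

    printf "null mean = %.3f +/- %.3f (SEM over %d shuffles)\n",
        $mean, $sem, scalar @vals;

The per-run numbers bounce around; only the aggregate over many runs, with its error bars, is worth reporting; and a curve produced from a single set of randomisations cannot distinguish signal from artifact.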