**FlexibleMoDe** contains the functionality of the other three learning tools and allows various additional parameters to be set in order to learn iPMMs in more specialized scenarios.
The underlying latent variable model contains -- in its maximal variant -- for every input sequence latent variables for (a) the motif position, (b) the motif type, and (c) the strand orientation.

If the content of the "Input data" file starts with '>', it is interpreted as a FastA file. Otherwise it is interpreted as plain text, where every line contains a single sequence. 
The input expects upper- and lowercase letters of the standard DNA alphabet {A,C,G,T}. If other symbols from the IUPAC code (such as N) are encountered, they are replaced by a random sample from the distribution of {A,C,G,T} in the data set. 
The input sequences are allowed to differ in length.
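The input handling described above can be sketched as follows. This is a minimal Python illustration, not the tool's actual implementation; the function names `read_sequences` and `replace_ambiguous` are hypothetical.

```python
import random

def read_sequences(path):
    """Read "Input data": FastA if the file starts with '>',
    otherwise plain text with one sequence per line."""
    with open(path) as f:
        text = f.read()
    if text.startswith(">"):
        seqs, current = [], []
        for line in text.splitlines():
            if line.startswith(">"):
                if current:
                    seqs.append("".join(current))
                current = []
            else:
                current.append(line.strip())
        if current:
            seqs.append("".join(current))
    else:
        seqs = [line.strip() for line in text.splitlines() if line.strip()]
    return [s.upper() for s in seqs]

def replace_ambiguous(seqs):
    """Replace non-ACGT symbols (e.g., IUPAC 'N') by a random draw
    from the A/C/G/T distribution of the whole data set."""
    pool = [c for s in seqs for c in s if c in "ACGT"]
    return ["".join(c if c in "ACGT" else random.choice(pool) for c in s)
            for s in seqs]
```

Note that sequences of differing length pass through unchanged, matching the statement above.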

The "Weights" file must, if specified, contain as many numbers (integer or double) as "Input data" has sequences.
The numbers may be separated either by whitespace or by tabs and they are allowed to be spread over multiple lines.
So files with (a) one line of *N* tab-separated numbers, or (b) *N* lines containing a single number, or (c) two lines of *N/2* whitespace-separated numbers, etc., are all treated equally.
If all given weights equal 1, the data is unweighted. Otherwise weight *i* is a multiplicative factor to the contribution of sequence *i* within the learning algorithm, and the sum of all weights is interpreted as sample size. 
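The layout-agnostic parsing of the "Weights" file can be sketched in a few lines of Python; `read_weights` and `effective_sample_size` are illustrative names, not part of the tool:

```python
def read_weights(path, num_sequences):
    """Parse the "Weights" file: integers or doubles separated by
    whitespace or tabs, possibly spread over multiple lines."""
    with open(path) as f:
        weights = [float(tok) for tok in f.read().split()]
    if len(weights) != num_sequences:
        raise ValueError("need exactly one weight per input sequence")
    return weights

def effective_sample_size(weights):
    """The sum of all weights is interpreted as the sample size;
    if all weights equal 1, this is simply the number of sequences."""
    return sum(weights)
```

Because `str.split()` splits on any run of whitespace, the three file layouts (a)–(c) above all yield the same list of weights.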

The "Motif width" determines the length of the putative binding sites and must thus not exceed the length of the shortest input sequence.

"Motif order(s)" expects a String of comma-separated integers. The number of integers determines the number of mixture components (of the motif model) and their values determine the corresponding model order.
For example, if this parameters is set to *1,2,0*, a three-component mixture with different maximal model orders in each component is used for the motif. 
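Parsing this parameter is straightforward; the following sketch (with the hypothetical name `parse_motif_orders`) makes the two roles of the string explicit:

```python
def parse_motif_orders(spec):
    """Parse "Motif order(s)": the number of integers gives the number
    of mixture components, their values the per-component model orders."""
    orders = [int(tok) for tok in spec.split(",")]
    if any(o < 0 for o in orders):
        raise ValueError("model orders must be non-negative")
    return orders
```

For the example above, `parse_motif_orders("1,2,0")` yields three components with maximal orders 1, 2, and 0, respectively.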

If "Update motif type parameters" is *true*, the occurrence probability of every mixture component is dynamically updated during the iterative search. Otherwise every component is assumed to occur with equal probability.  

The "Flanking order" has the same meaning as in **DeNovoMoDe** and is thus only relevant if the input sequences are longer than "Motif width".

"Both strands" determines whether the motif model(s) should be inferred from both strands. If *false*, only the forward strand is taken into account. 

The default values for "Initial iterations", "Additional iterations", and "Restarts" are relatively small but in many cases sufficient for finding a motif (if present in the data).
Increasing the number of restarts is typically the most promising option to increase the probability of finding a hard-to-spot pattern.

"Memoization" determines whether the memoization technique for speeding up structure learning (without affecting the optimal result itself) as described in
should be used. It does not change the obtained result and can yield significant speedups when the model order is large.
However, it can then also require substantially more RAM. Disabled by default.  
For details about the technique see:
R. Eggeling, M. Koivisto, I. Grosse. Dealing with Small Data: On the Generalization of Context Trees. *Proceedings of the 32th International Conference on Machine Learning (ICML)*. JMLR: Workshop and Conference Proceedings volume 37, 2015.

"Pruning" determines whether pruning techniques for speeding up exact structure learning of PCTs should be used. They do not change the obtained result and typically yield significant speedups for highly structured data.
However, in the worst case, when the data is close to uniform, as in the early stages of an iterative search, pruning can slightly slow down the algorithm.
Enabled by default. For details about the techniques see:
R. Eggeling, M. Koivisto. Pruning Rules for Learning Parsimonious Context Trees. *Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI)*. AUAI Press, 2016.

If no "Name" is specified, it is set by default to "FlexibleMode".

The tool returns
(i) a logfile containing the scores of all iteration steps in the stochastic search, for evaluating whether the parameter values for "Initial iterations", "Additional iterations", and "Restarts" have been sufficient, and
(ii) all learned motif model components, with each component containing exactly the same output as returned by **SimpleMoDe**.  

To obtain the exact functionality of the other three tools, "Weights" must either not be specified or all weights must equal *1*; otherwise, weighted-data versions of the learning algorithms are obtained. 
The parameters "Memoization" and "Pruning" affect only the time complexity of the structure learning and have no influence on the obtained result itself. By default pruning is enabled and memoization is disabled in all tools.
Apart from that, the following holds:
(1) If all sequences in "Input data" are of length "Motif width", if "Motif order(s)" is set to a single number *N*, and "Both strands" is set to *false*, then the tool does the same as **SimpleMoDe** with "Order" set to *N*.
(2) If "Motif order(s)" is set to a single number *N*, and "Both strands" is set to *true*, then the tool does the same as **DeNovoMoDe** with "Order" set to N.
(3) If all sequences in "Input data" are of length "Motif width", if "Motif order(s)" is set to a sequence of *K* integers *N* (separated by commas), if "Both strands" is set to *false*, and if "Update motif parameters" is set to *false* then the tool does the same as **MixtureMoDe** with "Number of mixture components" set to *K* and "Component Order" set to *N*.
All deviating parameter settings lead to scenarios that are not covered by the three less flexible tools.
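The equivalence rules above can be summarized as a small dispatcher. This is an illustrative sketch only: the function `equivalent_tool` and its arguments are hypothetical names, and unweighted data (the "Weights" condition above) is assumed throughout.

```python
def equivalent_tool(all_len_eq_width, orders, both_strands, update_motif_type):
    """Return the name of the less flexible tool whose behavior a given
    FlexibleMoDe parameter setting reproduces, or None if no such tool exists.

    all_len_eq_width  -- all input sequences have length "Motif width"
    orders            -- list of per-component model orders ("Motif order(s)")
    both_strands      -- value of "Both strands"
    update_motif_type -- value of "Update motif type parameters"
    """
    # Rule (1): one component, sequences as long as the motif, forward strand only.
    if len(orders) == 1 and all_len_eq_width and not both_strands:
        return "SimpleMoDe"
    # Rule (2): one component, both strands.
    if len(orders) == 1 and both_strands:
        return "DeNovoMoDe"
    # Rule (3): K identical orders, fixed-width sequences, forward strand,
    # motif type probabilities not updated.
    if (len(orders) > 1 and len(set(orders)) == 1 and all_len_eq_width
            and not both_strands and not update_motif_type):
        return "MixtureMoDe"
    return None
```

Any setting for which this sketch returns `None` corresponds to a scenario that only FlexibleMoDe covers.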