Algorithm for finding the smallest integer that satisfies several modular congruence conditions?














My first question! I work a lot with numbers (finance) but I'm very much an amateur mathematician, so please be gentle. I have the following problem, which arose from discussions about cryptography:



Given a set of prime numbers $p_1, p_2, \dots, p_n$ and sets $S_1, S_2, \dots, S_n$, where each $S_i$ contains a selection of residue classes modulo $p_i$, find the smallest positive integer $q$ that is congruent to some residue class in each $S_i$, $i = 1, \dots, n$.



Example:



$p_1 = 7$, $p_2 = 11$, $p_3 = 13$



$S_1 = \{[0]_7,[1]_7,[6]_7\}$



$S_2 = \{[1]_{11},[5]_{11},[6]_{11},[10]_{11}\}$



$S_3 = \{[2]_{13},[5]_{13},[8]_{13},[11]_{13}\}$



By the Chinese Remainder Theorem (CRT) there will always be a solution if the $S_i$ are non-empty. A brute-force approach is to take all combinations of one element from each $S_i$, apply the CRT to each combination in turn, and keep track of the smallest result.



For example, if we take the first element from each of the sets above, we can use the CRT to solve the following system of congruences for $q$:
$$
q \equiv 0 \pmod{7} \\
q \equiv 1 \pmod{11} \\
q \equiv 2 \pmod{13}
$$



In this case $q = 210$.



Repeating this process for all the other combinations, we can establish that the smallest possible $q$ is actually $21$ (with $21 \equiv [0]_7$, $21 \equiv [10]_{11}$, $21 \equiv [8]_{13}$).
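The brute-force search described above can be sketched in Python (a minimal illustration, not code from the original post; the names `crt` and `brute_force_min` are mine, and `pow(M, -1, m)` for modular inverses needs Python 3.8+):

```python
from itertools import product

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise-coprime moduli using the
    incremental CRT construction (modular inverses via 3-arg pow)."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # solve x + M*t = r (mod m)  =>  t = (r - x) * M^{-1} (mod m)
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x  # smallest non-negative solution modulo prod(moduli)

def brute_force_min(primes, residue_sets):
    """Try every combination of residues, keep the smallest CRT solution."""
    return min(crt(combo, primes) for combo in product(*residue_sets))

# The worked example from the question:
print(crt([0, 1, 2], [7, 11, 13]))                                    # 210
print(brute_force_min([7, 11, 13],
                      [[0, 1, 6], [1, 5, 6, 10], [2, 5, 8, 11]]))     # 21
```

The combination count is $\prod_i |S_i|$, which is exactly the blow-up discussed below.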



Of course, this is fine to do when the number of primes and the sizes of the $S_i$ are small, but it blows up quite quickly as they get larger. I'm interested in establishing whether a better algorithm could do this more efficiently, and ultimately in the computational complexity of this problem.



Any smart ideas? Literature recommendations or theorems in particular that might point me in the right direction would be most appreciated!





Edit: @Yong Hao Ng gives a good improvement on brute force in his answer below, but I think the problem still blows up subexponentially as the number of primes grows, assuming that the number of residue classes for each prime $p$ is roughly $p/2$. I'm wondering if there's anyone who can improve on this, or indeed explain why it can't be improved!










  • Look up the Extended Euclidean algorithm. – Ross Millikan, Jan 10 at 15:13

  • Thanks, I am familiar with the extended Euclidean algorithm, but the useful application here is not obvious to me, which probably says more about me than your suggestion! Can you elaborate? – JMP, Jan 10 at 16:09

  • The Chinese Remainder Theorem section on using the existence construction gives a sketch of the algorithm. – Ross Millikan, Jan 10 at 16:18

  • Thanks. To clarify, I'm not struggling to apply the CRT to the system of equations (which indeed requires the extended Euclidean algorithm/modular inverses) for any given combination of residues. My question is whether there is a smarter way to find the minimum solution without applying the CRT to all possible combinations of residues, given the restrictions in the problem. – JMP, Jan 10 at 16:34


















number-theory prime-numbers modular-arithmetic computational-complexity chinese-remainder-theorem






asked Jan 10 at 14:58 – JMP (163) · edited Jan 11 at 12:33




















1 Answer



















Edit: there was a serious flaw with the previous algorithm. I have improved it to cater to sparse solutions, which are more common for large data sets.





Split the primes into two sets $P_1$ and $P_2$ such that the numbers of residue classes on each side are as close as possible. Let $Q_1, Q_2$ be the products of the primes in $P_1$ and $P_2$ respectively.



First we compute the combined residue classes $R_1$ and $R_2$ modulo $Q_1$ and $Q_2$ of $P_1$ and $P_2$ respectively. We extend $R_1$ to also satisfy $\equiv 0 \pmod{Q_2}$ and call the result $S_1$, i.e. each $s_1 \in S_1$ satisfies
$$
\begin{align}
s_1 \pmod{Q_1} &\in R_1\\
s_1 &\equiv 0 \pmod{Q_2}
\end{align}
$$

Similarly, we extend $R_2$ to also satisfy $\equiv 0 \pmod{Q_1}$, so each $s_2 \in S_2$ satisfies
$$
\begin{align}
s_2 &\equiv 0 \pmod{Q_1}\\
s_2 \pmod{Q_2} &\in R_2
\end{align}
$$

Therefore $S_1, S_2$ are CRT solutions $\pmod{Q_1 Q_2}$, the full composite.
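The extension step can be sketched as follows (an illustrative helper I'm calling `extend`, not code from the answer): lifting each $r \in R_1$ to the unique $s_1 \in [0, Q_1 Q_2)$ with $s_1 \equiv r \pmod{Q_1}$ and $s_1 \equiv 0 \pmod{Q_2}$ is a single two-modulus CRT, computable with one modular inverse.

```python
def extend(R, Q_own, Q_other):
    """Lift each residue class r mod Q_own to the unique value s in
    [0, Q_own * Q_other) with s = r (mod Q_own) and s = 0 (mod Q_other)."""
    M = Q_own * Q_other
    inv = pow(Q_other, -1, Q_own)  # Q_other * inv = 1 (mod Q_own)
    # s = r * Q_other * inv  is = r (mod Q_own) and = 0 (mod Q_other)
    return sorted((r * Q_other * inv) % M for r in R)

# Example with the question's data, split as P1 = {7}, P2 = {11, 13}:
# Q1 = 7, Q2 = 143, R1 = {0, 1, 6} mod 7.
S1 = extend([0, 1, 6], 7, 143)
print(S1)  # [0, 286, 715]
```

Each element of $S_2$ is obtained the same way with the roles of $Q_1$ and $Q_2$ swapped.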





Now consider all the different sums
$$
m = s_1 + s_2 \pmod{Q_1 Q_2}, \quad s_1 \in S_1,\ s_2 \in S_2
$$

Taking $\pmod{Q_1}$, since $s_2 \equiv 0 \pmod{Q_1}$ we have
$$
m \equiv s_1 \pmod{Q_1} \in R_1
$$

Similarly, taking $\pmod{Q_2}$, since $s_1 \equiv 0 \pmod{Q_2}$ we have
$$
m \equiv s_2 \pmod{Q_2} \in R_2
$$

Therefore the $m$ constructed in this way give exactly all the residue classes $\pmod{Q_1 Q_2}$ of the original problem.





This means it suffices to find the smallest $m \pmod{Q_1 Q_2}$ possible. Since $0 \leq s_1, s_2 < Q_1 Q_2$, this is either the sum of the smallest $s_1$ and the smallest $s_2$, or a sum $s_1 + s_2$ that is just a bit more than $Q_1 Q_2$ (and so wraps around to a small residue).



Let $S_1$ and $S_2$ be sorted. To find the smallest sum of the latter type, we start with the largest $s_1$ and find the smallest $s_2$ such that $s_1 + s_2 \geq Q_1 Q_2$. We then iteratively move to a smaller $s_1$; at each step, if the new $s_1$ causes $s_1 + s_2 < Q_1 Q_2$, we also move $s_2$ to a larger value until $s_1 + s_2 \geq Q_1 Q_2$ again. It can be seen that this traverses each of $S_1, S_2$ at most once.
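The two-pointer sweep above might be sketched as follows (a hypothetical helper I'm calling `smallest_sum_mod`, assuming sorted input lists with $0 \le s < Q_1 Q_2$):

```python
def smallest_sum_mod(S1, S2, M):
    """Smallest (s1 + s2) % M over sorted lists S1, S2 with 0 <= s < M.

    Candidate 1: smallest s1 + smallest s2 (no wraparound needed).
    Candidate 2: the smallest sum just exceeding M, found by a
    two-pointer sweep that visits each list at most once.
    """
    best = S1[0] + S2[0]
    if best >= M:                  # sums are < 2M, so one subtraction suffices
        best -= M
    j = 0                          # pointer into S2, only ever moves right
    for s1 in reversed(S1):        # s1 moves from largest to smallest
        while j < len(S2) and s1 + S2[j] < M:
            j += 1                 # grow s2 until the sum wraps past M
        if j == len(S2):
            break                  # no s2 can make this (or a smaller) s1 wrap
        best = min(best, s1 + S2[j] - M)
    return best
```

A small synthetic check: with `S1 = [2, 5, 8]`, `S2 = [1, 4, 9]`, `M = 10`, the best wraparound sum is `2 + 9 = 11`, giving `1`.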





For the overall complexity analysis, suppose for simplicity that $R_1, R_2$ each have $\approx m$ classes. The brute-force method would then compute $\approx m^2$ CRTs, one per combination.

This method instead requires $2m$ CRTs to construct $S_1$ and $S_2$, followed by $\approx 2m$ (modular) additions during the sweep over the candidate sums $s_1 + s_2$.

Assuming that a CRT computation is more costly than a modular addition, the cost roughly drops from $m^2$ CRTs to $2m$ CRTs.






  • Thanks, this is a really thoughtful answer, and in fact it is similar to what I have been coming up with (although you got there much faster!). However, $R_1$ and $R_2$ grow very quickly (I think subexponentially, assuming the number of residue classes for each prime $p$ is roughly $p/2$) as the number of primes increases. I am wondering if there's any way to better subdivide the problem or tackle it more analytically. – JMP, Jan 11 at 12:25

  • @JMP It might be easy if the density of the candidates is very high. Then chances are the solution is in $[0, m)$ for some small $m$ and we can specifically target that region. However, I guess in practice it's much more likely to be rather sparse. For your example the density is $1/2$ per prime, so if you have $k$ primes of size $m$ the solution is quite likely to be around $m^k / (m/2)^k = 2^k$. So even if you have 30 primes this is just a search of up to $2^{30}$. – Yong Hao Ng, Jan 11 at 16:07

  • Supposing the density was roughly $p/2$, but that somehow you did know that the solution was in $[0, m)$, does this really change the problem? Is it, as you say, possible to "specifically target that region"? As I see it, finding a global minimum is hard precisely because you can't bound the solution when working with different moduli. – JMP, Jan 13 at 21:27

  • @JMP The basic idea I was thinking about is to directly check whether the solution is $x=1, x=2, \dots, x=m$. For each $x$ we try, to see whether it is valid for one prime $p_i$ it suffices to check if $x \pmod{p_i}$ is in $S_i$, so just $k$ modulo reductions and $k$ binary searches. This method would be terrible if the expected range $[0, m)$ is large, though. – Yong Hao Ng, Jan 14 at 2:19
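The direct-check idea from the last comment can be sketched like this (illustrative only; `direct_search` and its parameters are my names, not from the discussion):

```python
from bisect import bisect_left

def direct_search(primes, residue_sets, limit):
    """Check x = 1, 2, ..., limit directly: x is a solution iff
    x mod p_i lies in S_i for every prime p_i. Each candidate costs
    one modulo reduction and one binary search per prime."""
    sorted_sets = [sorted(S) for S in residue_sets]
    for x in range(1, limit + 1):
        if all(
            (j := bisect_left(S, x % p)) < len(S) and S[j] == x % p
            for p, S in zip(primes, sorted_sets)
        ):
            return x
    return None  # no solution in [1, limit]

# The question's example: the answer 21 is found well within limit = 100.
print(direct_search([7, 11, 13],
                    [[0, 1, 6], [1, 5, 6, 10], [2, 5, 8, 11]], 100))  # 21
```

As the comment notes, this only pays off when the solution is expected to be small relative to $\prod p_i$.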














Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3068732%2falgorithm-for-finding-the-smallest-integer-that-satisfies-several-modular-congru%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









1












$begingroup$

Edit: serious flaw with previous algorithm. Improved algorithm to cater to sparse solutions, which are more common for large data sets.





Split the primes into $P_1$ and $P_2$ such that the number of residue classes are as close as possible. Let $Q_1,Q_2$ be the product of primes in $P_1$ and $P_2$ respectively.



First we compute the residue classes $R_1$ and $R_2$ of $P_1$ and $P_2$ respectively. We extend $R_1$ to also satisfy $equiv 0pmod{Q_2}$ and call it $S_1$. i.e. each $s_1in S_1$ satisfies
$$
begin{align}
s_1pmod{Q_1} &in R_1\
s_1 &equiv 0 pmod{Q_2}
end{align}
$$

Similarly, we extend $R_2$ to also satisfy $equiv 0 pmod{Q_1}$. So each $s_2in S_2$ satisfies
$$
begin{align}
s_2 &equiv 0 pmod{Q_1}\
s_2 pmod{Q_2} &in R_2
end{align}
$$

Therefore $S_1,S_2$ are CRTs $pmod{Q_1Q_2}$, the full composite.





Now consider all the different sums
$$
m = s_1+s_2 pmod{Q_1Q_2},quad s_1in S_1, s_2in S_2
$$

Taking $pmod {Q_1}$, since $s_2equiv 0 pmod{Q_1}$ we have
$$
m equiv s_1 pmod{Q_1} in R_1
$$

Similarly, taking $pmod{Q_2}$, since $s_1equiv 0 pmod{Q_2}$ we have
$$
m equiv s_2 pmod{Q_2} in R_2
$$

Therefore the $m$ constructed in this way is exactly all the residue classes $pmod{Q_1Q_2}$ of the original problem.





This means it suffices to find the smallest $m pmod{Q_1Q_2}$ possible. Since $0leq s_1,s_2< Q_1Q_2$, this is either the sum of the smallest $s_1$ and $s_2$, or a sum of $s_1,s_2$ is just a bit more than $Q_1Q_2$.



Let $S_1$ and $S_2$ be sorted. To find the smallest for the latter type, we start with the largest $s_1$ and find the smallest $s_2$ such that $s_1+s_2geq Q_1Q_2$. Now we iteratively move to a smaller $s_1$, at each step if the new $s_1$ causes $s_1+s_2< Q_1Q_2$ then we also move $s_2$ to a larger one until $s_1+s_2geq Q_1Q_2$. It can be seen that this traverse $S_1,S_2$ exactly once.





For the overall complexity analysis, suppose for simplicity that $R_1,R_2$ each has $approx m$ classes, so $2m$ CRTs computed. Then the bruteforce method would compute $2m + m^2approx m^2$ CRTs.



This method would require $2m$ CRTs for constructing $S_1$ and $S_2$. It then requires $2m$ (modulo) sums to check for all the $m=s_1+s_2$.



Assuming that CRT is more costly than the modulo sums, the complexity roughly drops from $m^2$ CRTs to $2m$ CRTs.






share|cite|improve this answer











$endgroup$













  • $begingroup$
    thanks, this is a really thoughtful answer, and in fact is similar to what I have been coming up with (although you got there much faster!), however $R_1$ and $R_2$ grow very quickly (I think subexponentially, assuming number of residue classes for each prime p is roughly p/2 ) as the number of primes increases. I am wondering if there's anywhere to better subdivide the problem or tackle it more analytically.
    $endgroup$
    – JMP
    Jan 11 at 12:25










  • $begingroup$
    @JMP It might be easy if the density of the candidates is very high. Then chances are the solution is in $[0, m)$ for some small $m$ and we can specifically target that region. However I guess in practice it's much more likely to be rather sparse. For your example the density is 1/2 per prime, so if you have $k$ primes of size $m$ the solution is quite likely to be around $m^k / (m/2)^k = 2^k$. So even if you have 30 primes this is just a search of up to $2^{30}$.
    $endgroup$
    – Yong Hao Ng
    Jan 11 at 16:07










  • $begingroup$
    Supposing the density was roughly p/2, but that somehow you did know that the solution was in [0,m), does this really change the problem? Is it, as you say, possible to "specifically target that region"? As I see it, finding a global minimum is hard precisely because you can't bound the solution when working with different moduli.
    $endgroup$
    – JMP
    Jan 13 at 21:27










  • $begingroup$
    @JMP The basic idea I was thinking about is to directly check whether the solution is $x=1,x=2,dots,x=m$. For each $x$ we try, to see whether it is valid for one prime ${p_i}$ it suffices to check if $xpmod {p_i}$ is in $S_i$, so just $k$ modulo reductions and $k$ binary searches. This method would be terrible if the expected range $[0,m)$ is large though.
    $endgroup$
    – Yong Hao Ng
    Jan 14 at 2:19


















1












$begingroup$

Edit: serious flaw with previous algorithm. Improved algorithm to cater to sparse solutions, which are more common for large data sets.





Split the primes into $P_1$ and $P_2$ such that the number of residue classes are as close as possible. Let $Q_1,Q_2$ be the product of primes in $P_1$ and $P_2$ respectively.



First we compute the residue classes $R_1$ and $R_2$ of $P_1$ and $P_2$ respectively. We extend $R_1$ to also satisfy $equiv 0pmod{Q_2}$ and call it $S_1$. i.e. each $s_1in S_1$ satisfies
$$
begin{align}
s_1pmod{Q_1} &in R_1\
s_1 &equiv 0 pmod{Q_2}
end{align}
$$

Similarly, we extend $R_2$ to also satisfy $equiv 0 pmod{Q_1}$. So each $s_2in S_2$ satisfies
$$
begin{align}
s_2 &equiv 0 pmod{Q_1}\
s_2 pmod{Q_2} &in R_2
end{align}
$$

Therefore $S_1,S_2$ are CRTs $pmod{Q_1Q_2}$, the full composite.





Now consider all the different sums
$$
m = s_1+s_2 pmod{Q_1Q_2},quad s_1in S_1, s_2in S_2
$$

Taking $pmod {Q_1}$, since $s_2equiv 0 pmod{Q_1}$ we have
$$
m equiv s_1 pmod{Q_1} in R_1
$$

Similarly, taking $pmod{Q_2}$, since $s_1equiv 0 pmod{Q_2}$ we have
$$
m equiv s_2 pmod{Q_2} in R_2
$$

Therefore the $m$ constructed in this way is exactly all the residue classes $pmod{Q_1Q_2}$ of the original problem.





This means it suffices to find the smallest $m pmod{Q_1Q_2}$ possible. Since $0leq s_1,s_2< Q_1Q_2$, this is either the sum of the smallest $s_1$ and $s_2$, or a sum of $s_1,s_2$ is just a bit more than $Q_1Q_2$.



Let $S_1$ and $S_2$ be sorted. To find the smallest for the latter type, we start with the largest $s_1$ and find the smallest $s_2$ such that $s_1+s_2geq Q_1Q_2$. Now we iteratively move to a smaller $s_1$, at each step if the new $s_1$ causes $s_1+s_2< Q_1Q_2$ then we also move $s_2$ to a larger one until $s_1+s_2geq Q_1Q_2$. It can be seen that this traverse $S_1,S_2$ exactly once.





For the overall complexity analysis, suppose for simplicity that $R_1,R_2$ each has $approx m$ classes, so $2m$ CRTs computed. Then the bruteforce method would compute $2m + m^2approx m^2$ CRTs.



This method would require $2m$ CRTs for constructing $S_1$ and $S_2$. It then requires $2m$ (modulo) sums to check for all the $m=s_1+s_2$.



Assuming that CRT is more costly than the modulo sums, the complexity roughly drops from $m^2$ CRTs to $2m$ CRTs.






share|cite|improve this answer











$endgroup$













  • $begingroup$
    thanks, this is a really thoughtful answer, and in fact is similar to what I have been coming up with (although you got there much faster!), however $R_1$ and $R_2$ grow very quickly (I think subexponentially, assuming number of residue classes for each prime p is roughly p/2 ) as the number of primes increases. I am wondering if there's anywhere to better subdivide the problem or tackle it more analytically.
    $endgroup$
    – JMP
    Jan 11 at 12:25










  • $begingroup$
    @JMP It might be easy if the density of the candidates is very high. Then chances are the solution is in $[0, m)$ for some small $m$ and we can specifically target that region. However I guess in practice it's much more likely to be rather sparse. For your example the density is 1/2 per prime, so if you have $k$ primes of size $m$ the solution is quite likely to be around $m^k / (m/2)^k = 2^k$. So even if you have 30 primes this is just a search of up to $2^{30}$.
    $endgroup$
    – Yong Hao Ng
    Jan 11 at 16:07










  • $begingroup$
    Supposing the density was roughly p/2, but that somehow you did know that the solution was in [0,m), does this really change the problem? Is it, as you say, possible to "specifically target that region"? As I see it, finding a global minimum is hard precisely because you can't bound the solution when working with different moduli.
    $endgroup$
    – JMP
    Jan 13 at 21:27










  • $begingroup$
    @JMP The basic idea I was thinking about is to directly check whether the solution is $x=1,x=2,dots,x=m$. For each $x$ we try, to see whether it is valid for one prime ${p_i}$ it suffices to check if $xpmod {p_i}$ is in $S_i$, so just $k$ modulo reductions and $k$ binary searches. This method would be terrible if the expected range $[0,m)$ is large though.
    $endgroup$
    – Yong Hao Ng
    Jan 14 at 2:19
















1












1








1





$begingroup$

Edit: serious flaw with previous algorithm. Improved algorithm to cater to sparse solutions, which are more common for large data sets.





Split the primes into $P_1$ and $P_2$ such that the number of residue classes are as close as possible. Let $Q_1,Q_2$ be the product of primes in $P_1$ and $P_2$ respectively.



First we compute the residue classes $R_1$ and $R_2$ of $P_1$ and $P_2$ respectively. We extend $R_1$ to also satisfy $equiv 0pmod{Q_2}$ and call it $S_1$. i.e. each $s_1in S_1$ satisfies
$$
begin{align}
s_1pmod{Q_1} &in R_1\
s_1 &equiv 0 pmod{Q_2}
end{align}
$$

Similarly, we extend $R_2$ to also satisfy $equiv 0 pmod{Q_1}$. So each $s_2in S_2$ satisfies
$$
begin{align}
s_2 &equiv 0 pmod{Q_1}\
s_2 pmod{Q_2} &in R_2
end{align}
$$

Therefore $S_1,S_2$ are CRTs $pmod{Q_1Q_2}$, the full composite.





Now consider all the different sums
$$
m = s_1+s_2 pmod{Q_1Q_2},quad s_1in S_1, s_2in S_2
$$

Taking $pmod {Q_1}$, since $s_2equiv 0 pmod{Q_1}$ we have
$$
m equiv s_1 pmod{Q_1} in R_1
$$

Similarly, taking $pmod{Q_2}$, since $s_1equiv 0 pmod{Q_2}$ we have
$$
m equiv s_2 pmod{Q_2} in R_2
$$

Therefore the $m$ constructed in this way is exactly all the residue classes $pmod{Q_1Q_2}$ of the original problem.





This means it suffices to find the smallest $m pmod{Q_1Q_2}$ possible. Since $0leq s_1,s_2< Q_1Q_2$, this is either the sum of the smallest $s_1$ and $s_2$, or a sum of $s_1,s_2$ is just a bit more than $Q_1Q_2$.



Let $S_1$ and $S_2$ be sorted. To find the smallest for the latter type, we start with the largest $s_1$ and find the smallest $s_2$ such that $s_1+s_2geq Q_1Q_2$. Now we iteratively move to a smaller $s_1$, at each step if the new $s_1$ causes $s_1+s_2< Q_1Q_2$ then we also move $s_2$ to a larger one until $s_1+s_2geq Q_1Q_2$. It can be seen that this traverse $S_1,S_2$ exactly once.





For the overall complexity analysis, suppose for simplicity that $R_1$ and $R_2$ each have $\approx m$ classes, so $2m$ CRTs are computed. The brute-force method would then compute $2m + m^2 \approx m^2$ CRTs.



This method requires $2m$ CRTs to construct $S_1$ and $S_2$. The scan above then requires about $2m$ modular additions to check all the candidate sums $m = s_1 + s_2$.



Assuming that a CRT computation is more costly than a modular addition, the complexity roughly drops from $m^2$ CRTs to $2m$ CRTs.
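Putting the pieces together on the example from the question (a Python sketch; `crt` and `smallest_q` are names of my choosing): split the primes into halves, lift every combination in each half to the full modulus with the extra $\equiv 0$ congruence, then run the two-pointer scan.

```python
from itertools import product
from math import prod

def crt(pairs):
    """Combine (residue, modulus) pairs with pairwise-coprime moduli
    into the unique solution mod the product of the moduli."""
    M = prod(m for _, m in pairs)
    x = 0
    for a, m in pairs:
        Mi = M // m
        x = (x + a * Mi * pow(Mi, -1, m)) % M
    return x

def smallest_q(primes, classes):
    """Meet-in-the-middle search for the smallest positive solution."""
    half = max(1, len(primes) // 2)
    P1, P2 = primes[:half], primes[half:]
    Q1, Q2 = prod(P1), prod(P2)
    M = Q1 * Q2
    # Lift each half's combinations to mod M, appending the extra
    # congruence ≡ 0 modulo the other half's product.
    S1 = sorted(crt(list(zip(c, P1)) + [(0, Q2)]) for c in product(*classes[:half]))
    S2 = sorted(crt(list(zip(c, P2)) + [(0, Q1)]) for c in product(*classes[half:]))
    best = (S1[0] + S2[0]) % M          # smallest non-wrapping sum
    j = 0
    for s1 in reversed(S1):             # two-pointer wrap-around scan
        while j < len(S2) and s1 + S2[j] < M:
            j += 1
        if j == len(S2):
            break
        best = min(best, s1 + S2[j] - M)
    return best or M                    # residue 0 means the smallest positive q is M

# The example from the question:
print(smallest_q([7, 11, 13], [[0, 1, 6], [1, 5, 6, 10], [2, 5, 8, 11]]))  # → 21
```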









edited Jan 11 at 11:08

























answered Jan 11 at 9:42









Yong Hao Ng













  • thanks, this is a really thoughtful answer, and in fact is similar to what I have been coming up with (although you got there much faster!). However, $R_1$ and $R_2$ grow very quickly (I think subexponentially, assuming the number of residue classes for each prime $p$ is roughly $p/2$) as the number of primes increases. I am wondering if there's a way to better subdivide the problem or tackle it more analytically.
    – JMP
    Jan 11 at 12:25










  • @JMP It might be easy if the density of the candidates is very high. Then chances are the solution is in $[0, m)$ for some small $m$ and we can specifically target that region. However I guess in practice it's much more likely to be rather sparse. For your example the density is 1/2 per prime, so if you have $k$ primes of size $m$ the solution is quite likely to be around $m^k / (m/2)^k = 2^k$. So even if you have 30 primes this is just a search of up to $2^{30}$.
    – Yong Hao Ng
    Jan 11 at 16:07










  • Supposing the density was roughly $p/2$, but that somehow you did know that the solution was in $[0,m)$, does this really change the problem? Is it, as you say, possible to "specifically target that region"? As I see it, finding a global minimum is hard precisely because you can't bound the solution when working with different moduli.
    – JMP
    Jan 13 at 21:27










  • @JMP The basic idea I was thinking about is to directly check whether the solution is $x=1, x=2, \dots, x=m$. For each $x$ we try, to see whether it is valid for one prime $p_i$ it suffices to check if $x \pmod{p_i}$ is in $S_i$, so just $k$ modulo reductions and $k$ binary searches. This method would be terrible if the expected range $[0,m)$ is large though.
    – Yong Hao Ng
    Jan 14 at 2:19
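The direct search described in the last comment can be sketched as follows (Python; the function name is mine). Set membership gives O(1) lookups, standing in for the binary searches mentioned:

```python
def direct_search(primes, classes, m):
    """Try x = 1, 2, ..., m-1; return the first x whose residue modulo
    each prime lies in that prime's allowed set, or None if none does."""
    sets = [set(S) for S in classes]
    for x in range(1, m):
        if all(x % p in S for p, S in zip(primes, sets)):
            return x
    return None
```

On the question's example this finds 21 quickly, but as the comment notes, it degrades badly when the smallest solution is far out, e.g. around $2^k$ for $k$ primes at density 1/2.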



















