**ABSTRACT**

We wish to find the smallest non-negative integer, x, for which y = g^{x}, where y, g ∈ GF(p) (if such an x exists). This is the Discrete Logarithm Problem (DLP). A number of strategies have been proposed to solve the DLP, among them Shanks' Baby-Step Giant-Step algorithm, the Pollard Rho algorithm, the Pohlig-Hellman algorithm and the Index-Calculus method. We show that, given certain assumptions about the smoothness of the integers, the index calculus will, in general, outperform the other three methods, substantially increasing the range of problems which are feasible to solve and thereby threatening the security of DLP-based crypto-algorithms like the DH key exchange protocol, the ElGamal cryptosystem, DSA and many others. In this paper we describe the basic principles of and implementation procedures for these DLP crypto-algorithms. We will also discuss the general methods of attacking DLP cryptosystems and how secure the cryptosystems are against these general attacks. The mathematical challenge here lies in computing discrete logarithms in finite fields of type Z_{p}, which consist of the integers modulo a large prime p.


**How to cite this article**

*Journal of Applied Sciences, 5: 1692-1712.*

**DOI:** 10.3923/jas.2005.1692.1712

**URL:** https://scialert.net/abstract/?doi=jas.2005.1692.1712

**INTRODUCTION**

Discrete logs have a long history in number theory. Initially, they were used primarily in computations in finite fields (where they typically appeared in the closely related form of Zech’s logarithm). However, they were rather obscure, just like the Integer Factorization Problem (IFP). Unlike the latter, they could not even invoke any famous quotes of Gauss^{[1]} about their fundamental importance in mathematics. The status of discrete logs started to grow in the 20th century, as more computations were done and as more thought went into algorithmic questions. It appears that they started to play an important role in cryptography already in the 1950s, long before public key systems appeared on the scene, as cryptosystems based on shift-register sequences displaced those based on rotor machines^{[2]}. Discrete logs occur naturally in that context as tools for finding where in a shift register sequence a particular block occurs. The main impetus for the intensive current interest in discrete logs, though, came from the invention of the Diffie-Hellman method^{[3]}.

The most important tool necessary for the implementation of public-key cryptosystems is the Discrete Log Problem (DLP). Many popular modern crypto-algorithms base their security on the DLP^{[1,2]}. Based on the difficulty of this problem, Diffie and Hellman^{[3,4]} proposed the well-known Diffie-Hellman key agreement scheme in 1976. Since then, numerous other cryptographic protocols whose security depends on the DLP have been proposed, including: the ElGamal encryption and signature scheme^{[5]}; the US government Digital Signature Algorithm (DSA)^{[6-9]}, perhaps the best known example of a DLP system; the Schnorr signature scheme^{[10]}; and the Nyberg-Rueppel signature scheme^{[11,12]}. Due to interest in these applications, the DLP has been extensively studied by mathematicians for the past 20 years. The mathematical challenge here lies in computing discrete logarithms in finite fields of type Z_{p}, which consist of the integers modulo a large prime p. Although this problem can be considered difficult, there are known sub-exponential time algorithms for solving it, such as the index calculus^{[12]} and the Number Field Sieve (NFS)^{[13]}. In practical terms, sub-exponential time means that a determined hacker with enough processing power can break the system in a few months.

**THE MECHANICS OF DISCRETE LOG PROBLEM**

In an (abelian) group G^{*} (multiplicatively written) we can consider the equation y = x^{n}, x, y ∈ G, n ∈ Z. If x and y are known real numbers and it is also known that y is some power (say, the n-th) of x, then logarithms can be used to find n (= log_{x}y) in an efficient manner (here n is known as the index of y ∈ G). However, if x and y are given in a general group such that y = x^{n} = x·x·...·x (n times), then determining n is technically much harder and in general cannot be carried out in a reasonable amount of time. By analogy with the well-known real logarithm, we call n the discrete logarithm of y relative to the base x^{[10]}. The operation of exponentiation, x→y := x^{n}, can be implemented as a quick, efficient algorithm. As an example, the computation of 5^{e} can be performed quite efficiently by using the binary expansion of the exponent e; for e = 4369_{10} = 1000100010001_{2} = 2^{12}+2^{8}+2^{4}+2^{0}, we have:

5^{4369} = 5^{4096}·5^{256}·5^{16}·5,

where each factor is obtained by repeated squaring.

The DLP can also be cast in modular-arithmetic form and defined as follows: given a prime p, a generator g of Z_{p} and a non-zero element γ ∈ Z_{p}, find the unique integer k, 0≤k≤p–2, such that γ = g^{k} mod p. The integer k is called the discrete logarithm of γ to the base g (which we can write compactly as follows: b^{e} ≡ r (mod p) and e = log_{b}r, where b is the base, e is the exponent and r is the residue mod p). Here, Z_{p} denotes the set of integers {0, 1, 2, ..., p–1}, where addition and multiplication are performed modulo p. It is well known that there exists a non-zero element g ∈ Z_{p} such that each non-zero element in Z_{p} can be written as a power of g; such an element g is called a generator of Z_{p}.
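Written as code, the definition amounts to an exhaustive search for k, which is precisely the approach that becomes infeasible for large p. A minimal sketch (the parameters g = 3, p = 7 are tiny illustrative choices of ours, not from the text):

```python
def discrete_log_naive(g, gamma, p):
    """Return the smallest k in [0, p-2] with g^k = gamma (mod p), or None."""
    acc = 1
    for k in range(p - 1):
        if acc == gamma:
            return k
        acc = (acc * g) % p
    return None

# 3 is a generator of Z_7*: 3^4 = 81 = 4 (mod 7)
print(discrete_log_naive(3, 4, 7))  # 4
```

The loop performs up to p−2 multiplications, so the running time is O(p): fine for p = 7, hopeless for a 200-bit prime.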

Similarly, we can perform modular exponentiation easily; for example, the computation of 5^{e} mod p can be carried out efficiently by reducing modulo p after each squaring and after each multiplication by 5. That is, if p = 5779 and e = 4369 as before, then 5^{e} mod p = 4720.
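The reduce-after-every-step procedure described above is the standard square-and-multiply method; a minimal sketch, equivalent to Python's built-in three-argument `pow`:

```python
def mod_exp(b, e, p):
    """Compute b^e mod p, scanning the bits of e from the most significant end."""
    result = 1
    for bit in bin(e)[2:]:
        result = (result * result) % p      # square for every bit
        if bit == '1':
            result = (result * b) % p       # multiply on the 1-bits
    return result

print(mod_exp(5, 4369, 5779) == pow(5, 4369, 5779))  # True
```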

On a similar note, consider solving 5^{e} ≡ 2437 (mod 5779), which is equivalent to determining e = log_{5} 2437 in Z_{5779}; there is no known method of similarly low complexity for this direction. That is, it is easy to take an ordinary logarithm but not a modular (discrete) logarithm. This is the backbone of crypto-algorithms based on the discrete logarithm problem^{[1]}.

As an example, let’s solve b^{4369} ≡ 2437 (mod 5779) for the base b. To solve our DLP, we note that p = 5779 is indeed a prime number, so that by Fermat’s little theorem, a^{p-1} ≡ 1 (mod p) for a ∈ (Z/pZ)^{*}, we have b^{5778} ≡ 1 (mod 5779). It follows that, if we raise both sides of our equation to the power 529 (i.e., the multiplicative inverse e^{-1} (mod p-1) = 4369^{-1} mod 5778 = 529), we find the solution of our DLP: b ≡ r^{1/e} (mod p) ≡ 2437^{529} (mod 5779) = 2249.
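This computation is easy to replay; three-argument `pow` does the exponentiations, and `pow(e, -1, p - 1)` (Python 3.8+) gives the inverse exponent. The round-trip assertion holds by Fermat's little theorem; the value of b itself is the one the text reports:

```python
p, e, r = 5779, 4369, 2437
e_inv = pow(e, -1, p - 1)     # 4369^{-1} mod 5778 = 529
b = pow(r, e_inv, p)          # b = 2437^529 mod 5779 (the text reports 2249)
assert pow(b, e, p) == r      # b^4369 = 2437 (mod 5779)
```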

In general, if the modulus of the DLP is replaced with a product of two primes, then finding the solution becomes infeasible for large moduli, simply because the factorization of large integers is infeasible. This is the backbone on which the security of public-key cryptosystems like RSA rests^{[7,14]}. In the RSA public-key cryptosystem, for example, Bob’s public key is (e, n) and his private key is (d, n), where n is the product of two prime numbers p and q (i.e., n = p·q) such that ed ≡ 1 (mod (p-1)(q-1)). The product n is the modulus, e is the public exponent and d is the secret exponent. To encrypt a plaintext message M for Bob, Alice has to compute the ciphertext: C = M^{e} (mod n). Bob can decrypt C by computing: C^{d} = (M^{e})^{d} ≡ M (mod n). No one except Bob can decrypt C since d is known only to Bob. The RSA crypto-algorithm can be broken by factoring n into p and q. If n is factored then (p-1)(q-1) can be found and from this d can be computed. Hence, any adversary that factors n can find the private key d ≡ e^{-1} (mod (p-1)(q-1)) and with it decrypt any encrypted message^{[7,14]}. Therefore, the algorithm is secure only if factoring the product of two carefully chosen, sufficiently large prime numbers requires a super-polynomial amount of time with respect to the size of the number. The key question is, therefore, how large is sufficiently large to make this recovery virtually impossible? In the 1980s it was generally held that primes of some fifty digits (i.e., around 10^{50}) would suffice. Currently, you need a 1024-bit number to get the same security you got from a 512-bit number in the early 1980s^{[13]}. If you want your keys to remain secure for 20 years, 1024 bits is probably too short (Fig. 1)^{[16]}.
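As a concrete illustration of the trapdoor, here is a toy RSA round trip. The primes p = 61, q = 53 and message M = 65 are standard textbook toy values (our own choice, not from this paper) and are of course far too small for real security:

```python
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120; secret once p and q are discarded
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # secret exponent: e*d = 1 (mod phi)

M = 65                       # plaintext
C = pow(M, e, n)             # encryption: C = M^e mod n
assert pow(C, d, n) == M     # decryption with d recovers M
```

Anyone who factors n = 3233 into 61·53 recovers phi and hence d, which is exactly the attack described above.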

A technical advantage of DLP-based crypto-algorithms over RSA is that, in many cases where algorithms of comparable functionality exist (say, one over the finite field of integers modulo a prime p and another using a composite integer n of the same size), breaking the discrete log modulo p appears to be somewhat harder than factoring the integer n. For purposes of key generation, many people prefer the DH algorithm over RSA, since the session key it generates is evanescent. In the simplest application of RSA to key generation, as seen above, Alice creates a session key and transmits it to Bob using Bob’s public key. An eavesdropper who can coerce Bob, via clever social engineering, into revealing his private key can recover the full text of the communication exchanged between Alice and Bob. On the other hand, if Alice and Bob use the DH key exchange protocol to generate the session key, destroy it after the session ends and do not store their communication, then neither coercion nor cryptanalysis will enable the eavesdropper to recover what information was exchanged. It is widely believed that the DSA is based on discrete logs because such a system is harder to use for encryption than one based on RSA (and thus on the Integer Factorization Problem, IFP).

Standard DLP cryptosystems are based on multiplicative groups with the main operation of exponentiation.

Fig. 1: Proposed minimum key sizes (in bits) to be regarded as safe for RSA and ECC

The corresponding problem in additively written (abelian) groups is: given P and kP = Q (P added to itself k times), find the integer k (i.e., find k = log_{P} Q). This is much more difficult! There is no one-step operation like taking logarithms that we can use to get the solution. So we may know P and kP and yet not be able to find k in a reasonable amount of time. This is called the Discrete Log Problem for abelian groups. We could always repeatedly subtract P from kP till we get 0, but if k is large, this will take us a very long time! Several important cryptosystems are based on the difficulty of solving the DLP over finite abelian groups. The problem is even tougher if the underlying group arises from an elliptic curve over a finite field^{[16]}.

In 1985, Victor Miller (IBM)^{[17]} and Neal Koblitz (University of Washington)^{[18]} independently realized that the additive group associated with an elliptic curve can be used for a method similar to the classical Diffie-Hellman (DH) public-key exchange. In the same year Scott Vanstone and co-researchers realized that the invention by Miller and Koblitz was not just a mathematical generalization of the original DH idea, but that some aspects of it could lead to a very promising alternative public-key cryptosystem that could be put into real practical application; this led to the formation of what is today Certicom^{[19]}. In Elliptic Curve Cryptography (ECC)^{[16]}, the multiplicative group is replaced by the additive group of elliptic curve points and the exponentiation operation by scalar multiplication of a point (i.e., calculation of g^{k} = g·g·...·g (k times) for a generator g of a multiplicative group is replaced by calculation of [k]P = P+P+...+P (k times) for a generator point P of an additive group of elliptic curve points). Thus, the computational performance of cryptographic protocols based on elliptic curves strongly depends on the efficiency of the scalar multiplication. Further, elliptic curve cryptosystems appear to offer the possibility of using much smaller key sizes than would be required by RSA-type crypto-algorithms of comparable security (Fig. 1). For more detail on the implementation of ECC, see^{[16-19]}.

**Time complexity of DLP:** Suppose G = Z^{*}_{p} with p a 200-bit prime. Use 2^{20} computers, each running at 400 MHz, and assume that one exponentiation (b^{e} mod p) takes only one machine cycle! Then 400·2^{20} ≈ 2^{29} exponentiations can be performed per second per computer. As there are around 2^{25} sec in a year, it would take about 2^{200}/(2·2^{20+29+25}) ≈ 2^{125} years to find the desired discrete logarithm, on average, by the trial-and-error method.
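The figures in this estimate are easy to sanity-check with exact integer arithmetic:

```python
seconds_per_year = 3600 * 24 * 365
assert 2**24 < seconds_per_year < 2**25       # ~3.15e7, close to 2^25

# 2^20 machines * 2^29 exponentiations/s * 2^25 s/year = 2^74 trials/year;
# on average half of the 2^200 keyspace is searched, hence the factor 2.
years = 2**200 // (2 * 2**(20 + 29 + 25))
assert years == 2**125
```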

The presumed intractability of the DLP, for appropriate choices of G, in contrast to the relative efficiency of calculating the discrete exponentiation, has made the DLP a basic building block of many cryptographic applications, including public-key encryption algorithms, **digital signature** schemes and key agreement protocols^{[2-6,11,16-19]}.

**DIFFIE-HELLMAN MULTIUSER CRYPTOSYSTEM**

Prior to 1970, symmetric-key cryptosystems had been the crypto-mode in existence. In symmetric-key crypto-protocols, a common key (the master shared secret key) is used by both communicating parties to encrypt and decrypt messages. These symmetric-key crypto-protocols provide high-speed key and communication throughput but have the drawback that a common (or session) key must be established before communication between the parties can begin. The process of exchanging the crypto-keys is referred to as key distribution and can be very difficult^{[20]}!

It was Merkle who first introduced the basic concept of public-key cryptosystems with a view to overcoming the key distribution problem^{[7,21]}. However, it was W. Diffie and M. Hellman^{[3,4]} who were the first to introduce practical public-key cryptography, which eliminated the need for the key distribution encountered with private-key cryptosystems and is widely used today. The system was discovered independently by GCHQ (British Intelligence) a few years before Diffie and Hellman found it, but its discoverers could not tell anyone about their work; perhaps it was discovered by others before. That this system was discovered independently more than once shouldn’t surprise you, given how simple it is! The encoding function here is a trapdoor function: one whose inverse is impractical to implement unless some extra information is available. This extra information (called the decrypting key) is not required for encrypting the message, yet is essential for decrypting it in reasonable time. The beauty of such a system is that the encrypting process need not be kept secret. Each user has his own personal encrypting function, which is public information (hence the name public-key) and a decoding key, which he keeps secret.

**The basic concept of public-key crypto-algorithm:** In a public-key cryptosystem, each user places an encryption procedure E in a public-key server. That is, the public-key server is a directory giving the encryption procedure of each user. The user keeps secret the details of his corresponding decryption procedure D. These procedures have the following properties:

a. Decrypting the encrypted form of a plaintext message C = E(T) yields T, i.e.:

D(C) = D(E(T)) = T

b. Both E and D are easy to compute.

c. By publicly revealing E the user does not reveal an easy way to compute D. This means that in practice only he can decrypt messages encrypted with E, or compute D efficiently.

d. If a message T is first deciphered and then enciphered, T is the result, i.e.:

E(D(T)) = T

An encryption (or decryption) procedure typically consists of a general method and an encryption key. The general method, under control of the key, encrypts a plaintext message T to obtain the encrypted form of the message, or ciphertext *C*. Everyone can use the same general method; the security of a given message will rest on the security of the key. Revealing an encryption algorithm then means revealing the key.

When the user reveals E, he reveals a very inefficient method of computing D(C): testing all possible messages T until one finds E(T) = C. If property (c) is satisfied the number of such messages to test will be so large that this approach is impractical.

A function E satisfying (a)-(c) is a trap-door one-way function; if it also satisfies (d) it is a trap-door one-way permutation. In 1974 the first detailed description of such a one-way function was published^{[7,21]}. That is, a one-to-one function f: X→Y is one-way if it is easy to compute f(x) in polynomial time for any x ∈ X but hard to compute f^{-1}(y) for most randomly chosen y in the range of f. Diffie and Hellman^{[3,4]} were the first to introduce the concept of trap-door one-way functions into crypto-algorithms. These functions are called one-way because they are easy to compute in one direction but (apparently) very difficult to compute in the other direction. They are called trap-door functions since the inverse functions are in fact easy to compute once certain private trap-door information is known. A trap-door one-way function that also satisfies (d) must be a permutation: every message is the ciphertext for some other message and every ciphertext is itself a permissible message (the mapping is one-to-one and onto). Property (d) is needed to implement digital signature schemes^{[7,8]}. Public-key cryptosystems, however, tend to be more computation-intensive than symmetric ones, many of which are fully executed in hardware alone^{[20]}, but their algebraic foundations provide robust proofs of security that few symmetric crypto-schemes can match^{[2]}.

**The mechanics of the Diffie-Hellman algorithm:** The Diffie-Hellman procedure depends on rather magical properties of whole numbers. In the nineteenth century Gauss established an elaborate body of theorems based on the idea of arithmetical remainders. He adopted a notation that is widely used by mathematicians today, y = x (mod n), which means that if you divide y by n you get the remainder x (cf. 15 = 2 mod 13). The symbol = (or ≡) is called a congruence relation and simply means equivalent to, while mod is short for modulus, or modulo; we will use both symbols interchangeably. If you multiply two numbers in this system you still get a number between 0 and 12. For example, 4×6 mod 13 = 24 mod 13 = 11. This, then, is just a notation.

The Diffie-Hellman technique makes use of the apparent difficulty of computing logarithms over a finite field F_{p} with p elements. The systems parameters, therefore, consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements. Let:

y = g^{x} (mod p), for 1 ≤ x ≤ p-1 (1)

where g is a fixed primitive element of F_{p}; then x is referred to as the logarithm of y to base g, modulo p:

x = log_{g} y (mod p), for 1 ≤ y ≤ p-1 (2)

Calculation of y from x is easy, taking at most 2·log_{2} p multiplications^{[22]}. For example, for x = 34:

y = g^{34} = ((((g^{2})^{2})^{2})^{2})^{2}·g^{2}

Computing x from y, on the other hand, can be much more difficult and, for certain carefully chosen values of p, requires on the order of p^{1/2} operations, using the best known algorithm^{[23]}. The security of DH, therefore, depends crucially on the difficulty of computing logarithms modulo p, and if an algorithm whose complexity grew as log_{2} p were to be found then DH crypto-security would be broken^{[1]}.

**CLASSICAL DIFFIE-HELLMAN KEY EXCHANGE PROTOCOL**

Let us consider our usual communicating partners, Alice and Bob, and not forget the eavesdropper Eve. Alice must communicate vital information to Bob that must reach him securely. Their communications are monitored by Eve, who must not discover the message. If Alice and Bob could agree on a secret encoding key, they could encrypt their message. Fortunately, Alice knows the Diffie-Hellman algorithm.

Now let’s apply this mathematical procedure to the Diffie-Hellman algorithm. The method works as follows: (i) Both the active participants (say Bob and Alice) must first agree on a prime number p and a generator g. The numbers p and g can be publicly known. Parameter p is a prime number and parameter g (usually called a generator) is an integer less than p, which is capable of generating every element from 1 to p-1 when multiplied by itself a certain number of times, modulo the prime p. (ii) Each participant must next choose another randomly generated number, perform a mathematical operation that involves p, g and the chosen number, and then transmit the result to the other participant.

**Generation of shared key using DH key exchange protocol:** The systems parameters consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements. Alice and Bob agree on a prime number p and an integer g that has order p-1 modulo p (so g^{p-1} ≡ 1 (mod p), but g^{s} ≢ 1 (mod p) for any positive s < p-1). Alice chooses a random number s_{A} < p and Bob chooses a random number s_{B} < p. Alice sends k_{A} = g^{s_{A}} (mod p) to Bob and Bob sends k_{B} = g^{s_{B}} (mod p) to Alice.

Alice (A) can now compute the secret-key:

k_{AB} = (k_{B})^{s_{A}} = g^{s_{B}s_{A}} (mod p)

Likewise, Bob (B) computes the secret-key:

k_{BA} = (k_{A})^{s_{B}} = g^{s_{A}s_{B}} (mod p)

One may notice that k = k_{AB} = k_{BA} = g^{s_{A}s_{B}} (mod p) is the session or shared master secret key between Alice and Bob for future secure communication. The session-keys are the same (since s = s_{A}s_{B} = s_{B}s_{A}); hence Alice and Bob now have a shared master secret-key k. Future communications occur using the session key k. Thus, the session-key k can be used with a private-key crypto-algorithm such as the Data Encryption Standard (DES), 3DES or AES, for encryption purposes.
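The exchange just described fits in a few lines; p = 5779 and g = 5 below are toy values (reused from the earlier exponentiation example) chosen purely for illustration, far too small for real use:

```python
import random

p, g = 5779, 5                       # toy parameters for illustration only

s_A = random.randrange(2, p - 1)     # Alice's secret exponent
s_B = random.randrange(2, p - 1)     # Bob's secret exponent

k_A = pow(g, s_A, p)                 # Alice transmits g^{s_A} mod p
k_B = pow(g, s_B, p)                 # Bob transmits g^{s_B} mod p

k_Alice = pow(k_B, s_A, p)           # Alice exponentiates what she received
k_Bob = pow(k_A, s_B, p)             # Bob does the same

assert k_Alice == k_Bob == pow(g, s_A * s_B, p)   # both hold g^{s_A s_B} mod p
```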

Now Alice uses the master secret key k to send Bob an encrypted version of her critical message. Bob, who also knows k, is able to decode the message. Meanwhile, the eavesdropper Eve sees both k_{A} = g^{s_{A}} (mod p) and k_{B} = g^{s_{B}} (mod p), but she is not able to use this information to deduce s_{A}, s_{B} or g^{s_{A}s_{B}} (mod p) quickly enough to be of any use. This is the backbone of DLP crypto-security.

The only information that Eve knows is the group together with g, k_{A} = g^{s_{A}} and k_{B} = g^{s_{B}}. If Eve can recover g^{s_{A}s_{B}} from this data then Eve is said to solve the Diffie-Hellman Problem (DHP). More specifically, we can define the DHP as follows: given a finite cyclic group G with generator g ∈ G and elements α = g^{a}, β = g^{b} ∈ G, the DHP asks for γ ∈ G such that γ = g^{ab}. It is easy to see that if Eve can find discrete logarithms in G (the residue group modulo p) then she can solve the DHP. It is believed for most groups in use in cryptography that the DHP and the Discrete Log Problem (DLP) are equivalent in the complexity-theoretic sense: there is a polynomial-time reduction of one problem to the other and vice versa^{[2,24,25]}. In any case, if one wishes to use the DHP in a particular group as the basis of a cryptosystem, it is necessary that the DLP be hard in that group! Although many groups have been proposed for which the DHP may be hard and which may be used securely, in practice only two are most often used. One is the multiplicative group (F_{q})^{*} of a finite field of order q, which is employed in this study; a slight modification of it is employed in the Digital Signature Algorithm (DSA)^{[25]}. The other is the group of points on an elliptic curve over a finite field.

However, it turns out that if the numbers are all sufficiently large, it is very hard to calculate the discrete logarithm in a reasonable time. The security of the Diffie-Hellman algorithm depends on this fact. Put differently, the eavesdropper has access to p, g, k_{A} and k_{B}, but to neither s_{A} nor s_{B}. As a result, the eavesdropper cannot calculate k.

If p is a prime slightly less than 2^{n}, then all quantities are representable as n-bit numbers. Exponentiation then takes at most 2n multiplications mod p, while by hypothesis taking logs requires p^{1/2} ≈ 2^{n/2} operations. The cryptanalytic effort therefore grows exponentially relative to the computational effort. If n = 200, then at most 400 multiplications are required to compute k_{A} from s_{A}, or k from k_{A} and s_{B}, yet taking logs mod p requires 2^{100}, or approximately 10^{30}, operations.

**Implementation of DH key exchange protocol:** In practice the systems parameters consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements. For practical application and security reasons, crypto-keys of 1024 bits or greater are recommended (Fig. 1). However, here we consider overly simple numbers to help us understand the basic implementation of the DH procedure.

**Systems parameters:** The two communicating entities, Alice (A) and Bob (B), both select the prime p = 12884901893 (an 11-decimal-digit, 34-bit number) and a generator g = 12 of order p-1 = 12884901892; here g, k_{A}, k_{B} ∈ Z^{*}_{p}.

**Alice (A):**

• Selects a random integer s_{A} = 2^{10} as her secret-key.

• Computes and sends to Bob: k_{A} = g^{s_{A}} mod p = 12^{1024} (mod p) = 3505577916

**Bob (B):**

• Chooses a random integer s_{B} = 2^{15} < p as his secret-key.

• Computes and sends to Alice: k_{B} = g^{s_{B}} mod p = 12^{32768} (mod p) = 9663562615

**Each entity computes the shared master secret-key:**

**Alice:** k_{AB} = (k_{B})^{s_{A}} = g^{s_{B}s_{A}} (mod p), where the shared exponent is s = s_{B}s_{A} = s_{A}s_{B} = 33554432

**Bob:** k_{BA} = (k_{A})^{s_{B}} = g^{s_{A}s_{B}} (mod p)

Hence the session master secret key:

k = k_{BA} = k_{AB} = g^{s} = 11093904324 (mod p)

Indeed:

s = s_{A}s_{B} = log_{g} 11093904324 = 33554432
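The worked example can be re-run directly with the paper's parameters; the assertions check the consistency properties that must hold (both parties derive the same key, equal to g^{s} with s = s_{A}·s_{B}):

```python
p, g = 12884901893, 12
s_A, s_B = 2**10, 2**15

k_A = pow(g, s_A, p)          # what Alice sends to Bob
k_B = pow(g, s_B, p)          # what Bob sends to Alice

k = pow(k_B, s_A, p)          # Alice's session key
assert k == pow(k_A, s_B, p)  # Bob derives the same key
assert s_A * s_B == 33554432
assert k == pow(g, 33554432, p)  # k = g^s, the value the text reports as 11093904324
```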

The DH method is used for communication between two people and makes use of three keys: two secret-keys (one for each person) and a session key determined by the two people during the course of the conversation. In other words, the conversation starts with the two people using their own keys; they exchange information to determine a session key which is then used for all future messages. It is important to note that Diffie-Hellman algorithm is an excellent tool for key distribution, but cannot be used effectively to encrypt and decrypt messages on the fly independent of the person one communicates with (cf. email communication).

**THE ELGAMAL ALGORITHM**

The algorithm that we will use here is the ElGamal encryption algorithm. Taher ElGamal was the first mathematician to propose a public-key cryptosystem based on the Discrete Logarithm Problem (DLP)^{[5]}. He in fact proposed two distinct cryptosystems, one for encryption and the other a **digital signature** scheme, in 1984. Since then, many variations have been made on the **digital signature** system to offer improved efficiency over the original system. The ElGamal public-key encryption scheme can be viewed as the Diffie-Hellman key agreement protocol in key transfer mode. Its security is based on the intractability of the discrete logarithm problem (DLP) and the Diffie-Hellman Problem (DHP)^{[3,4]}.

The ElGamal encryption algorithm is very similar to the RSA encryption algorithm in the sense that it is a public-key algorithm which utilizes modular arithmetic on large numbers^{[7]}. However, the mathematics involved is slightly more complicated. Let us start by examining what values constitute both the public and private keys and how to generate them.

The systems parameters consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements, as in the Diffie-Hellman method. Let’s assume the two entities Alice (A) and Bob (B) want to communicate. Alice (A) generates a secret (private) key a, a randomly chosen large integer such that 1≤a≤p-2, and computes her public-key A:

A = g^{a} (mod p) (3)

Alice’s authentic public-key set is (p, g, A) and her private-key is (a, p). Note that g must be less than p and relatively prime to p. This is essential to the algorithm because if g and p are not relatively prime, then we will have trouble using our mod operation.

**Encryption:** Bob (B) encrypts a message M for Alice (A), which A decrypts.

Suppose Bob wishes to send a plaintext message M to Alice, where, M is an integer in the range [0, p-1]:

• Bob first obtains A’s authentic public-key ring (p, g, A) placed in the public-key server.

• Generates a large random integer b such that 1≤b≤p–2.

• Computes his public-key: B = g^{b} (mod p)

• Uses the DH key exchange protocol which they had agreed upon a priori to compute the shared master secret key: S_{AB} = (A)^{b} mod p = g^{ab} (mod p)

• Encrypts the plaintext message M: δ = M·S_{AB} (mod p)

• Sends the ciphertext C = {B, δ} to Alice.

**Decryption:** Upon receiving the ciphertext C, Alice uses her private-key a to compute:

B^{-a} (mod p) = g^{-ab} (mod p)

and recovers the plaintext message M by computing B^{-a}·δ mod p, which can easily be proved as follows:

B^{-a}·δ ≡ g^{-ab}·M·g^{ab} ≡ M (mod p)

**Implementation of ElGamal encryption with artificially small parameters**

**Key generation: ** Entity A selects the prime p = 12884901893 and a generator g = 12 in Z^{*}_{p} with order p-1 = 12884901892.

• Alice (A) chooses the private-key: a = 2^{10} = 1024 < p.

• Computes: A = g^{a} mod p = 12^{1024} (mod p) = 3505577916

• A’s public-key ring is: (p, g, A) = (12884901893, 12, 3505577916).

**Encryption (Bob):** To encrypt a plaintext message M = 352247.

• Bob (B) selects as his private key a random integer: b = 2^{15} = 32768 < p.

• Computes his public key as: B = g^{b} = 12^{32768} mod p = 9663562615.

• He computes the shared master secret key using the DH key exchange protocol as:

S_{AB} = (A)^{b} = (g^{a})^{b} ≡ (3505577916)^{32768} (mod p) = 11093904324

• He encrypts the ciphertext as:

δ ≡ M·S_{AB} (mod p) ≡ 352247·11093904324 (mod p) = 9930699416

• B sends C = (B, δ) = (9663562615, 9930699416) to A.

**Decryption (Alice):** To recover the message M, Alice (A) does the following procedure:

• Receives the ciphertext: C = (B, δ) = (9663562615, 9930699416)

• Computes the master shared secret key: S_{AB} = (B)^{a} = (g^{b})^{a} ≡ (9663562615)^{1024} (mod p) = 11093904324

• Recovers M by computing: M = δ·(S_{AB})^{-1} (mod p) = 352247, where we note that (S_{AB})^{-1} ≡ (S_{AB})^{p-2} (mod p).
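The whole exchange above can be replayed in a few lines; the final assertion, that decryption recovers M, is exactly the identity B^{-a}·δ ≡ M (mod p) proved earlier. The modular inverse is taken with `pow(S, -1, p)` (Python 3.8+):

```python
p, g = 12884901893, 12
a, b = 2**10, 2**15              # Alice's and Bob's secret exponents
A = pow(g, a, p)                 # Alice's public key (text: 3505577916)
B = pow(g, b, p)                 # Bob's public key (text: 9663562615)

M = 352247                       # plaintext
S_AB = pow(A, b, p)              # Bob's copy of the shared key g^{ab}
delta = (M * S_AB) % p           # ciphertext component

S = pow(B, a, p)                 # Alice recomputes g^{ab} from B and a
recovered = (delta * pow(S, -1, p)) % p
assert recovered == M            # decryption recovers the plaintext
```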

**Common system-wide parameters:** All entities may elect to use the same prime p and generator g, in which case p and g need not be published as part of the public-key. This results in public-keys of smaller sizes. An additional advantage of having a fixed base g is that exponentiation can be expedited via precomputation. A potential disadvantage of common system-wide parameters is that a larger modulus p may be warranted. For practical application and security reasons, crypto-keys of 1024 bits or greater are recommended (Fig. 1).

**DIGITAL SIGNATURE AND AUTHENTICATION**

Authentication is more important than encryption^{[8,26-30]}. Most people's security intuition says exactly the opposite, but it's true. Imagine a situation where Alice and Bob are using a secure communications channel to exchange data. Consider how much damage an eavesdropper (Eve) could do if she could read all the traffic. Then think about how much damage Eve could do if she could modify the data being exchanged. In most situations, modifying data is a devastating attack and does far more damage than merely reading it.

One way to address the authentication problem encountered in public-key cryptography is to attach a **digital signature** to the end of each message that can be used to verify the sender of the message^{[7,28]}. The significance of a **digital signature** is comparable to the significance of a handwritten signature. In some situations, a **digital signature** may be as legally binding as a handwritten signature. Once you have signed some data, it is difficult to deny doing so later, assuming that the private-key has not been compromised or left the owner's control. This quality of digital signatures provides a high degree of nonrepudiation, i.e., digital signatures make it difficult for the signer to deny having signed the data^{[7,8,28]}. This quality is stronger than mere authentication (where the recipient can verify that the message came from the sender): the recipient can convince a judge that the signer sent the message. To do so, he must convince the judge he did not forge the signed message himself! In the authentication problem the recipient does not worry about this possibility, since he only wants to satisfy himself that the message came from the sender.

In short, an electronic signature must be message-dependent as well as signer-dependent. Otherwise the recipient could modify the message before showing the message-signature pair to the judge. Or he could attach the signature to any message whatsoever, since it is not possible to detect electronic cutting and pasting. To implement signatures the public-key cryptosystem must be implemented with trap-door one-way permutations, i.e., have property (d), since the decryption algorithm will be applied to unenciphered messages^{[7,8,10]}.

ElGamal introduced the first **digital signature** scheme based on the discrete logarithm problem. The ElGamal signature scheme is similar to the ElGamal encryption algorithm in that the public key and private key have the same form; however, unlike in RSA, encryption is not the same operation as signature verification, nor is decryption the same as signature creation. The DSA is based in part on the ElGamal signature algorithm^{[7,8]}.

For a long period of time (1985-1996) after the birth of the ElGamal signature scheme and the family of such signatures (e.g., Schnorr and DSS), it was widely believed that the difficulty of forging such a signature should somehow be related to the discrete logarithm in a large subgroup of a finite field^{[5,10,28]}. However, no formal evidence (formal proof) was established until 1996, when Pointcheval and Stern succeeded in demonstrating affirmative evidence relating the difficulty of signature forgery under a signature scheme in the ElGamal family to that of computing discrete logarithms^{[26]}. They did so by making use of a powerful tool: the Random Oracle Model (ROM) for proofs of security. The ROM-based technique of Pointcheval and Stern is an insightful instantiation of the general ROM-based security proof technique for proving security of the ElGamal-family signatures.

**Digital Signature Algorithm:** The Digital Signature Algorithm (DSA) was proposed in August 1991 by the U.S. National Institute of Standards and Technology (NIST) and was later specified in a U.S. Government Federal Information Processing Standard (FIPS 186^{[29,30]}) called the Digital Signature Standard (DSS). It was designed at the NSA as part of the Federal Government's attempt to control the use of high-security cryptography. Part of that policy included prohibition (with severe criminal penalties) of the export of high-quality encryption algorithms. The DSS was intended to provide a way to use high-security digital signatures across borders in a way which did not allow encryption: such signatures require strong asymmetric-key algorithms, and the DSA (the algorithm at the heart of the DSS) was intended to allow one use of those algorithms but not the other. It did not work. Shortly after its release, DSA was discovered to be capable of encryption (prohibited high-quality encryption, at that); however, it is so slow when used for encryption as to be even more than usually impractical.

The US government based their Digital Signature Algorithm (DSA) on much of ElGamal's work^{[5]} and it is the best-known example of a large system where a Discrete Logarithm (DL) algorithm is used. Its security is based on the intractability of the Discrete Logarithm Problem (DLP) in a prime-order subgroup of Z^{*}_{p}. As with the RSA algorithm, these transformations raise the computational complexity of the problem. The discrete logarithm system relies on the discrete logarithm problem modulo p for security and on the speed of modular exponentiation for efficiency. In terms of computational difficulty, the discrete logarithm problem seems to be on a par with factoring^{[28]}.

**The Mechanics of Digital Signature Algorithm (DSA):** The signature-creation data consists of the private key x together with the public parameter y, an integer computed as y = g^{x} mod p, as per the DLP above. Note that p and q are large prime numbers^{[31]}. When computing a signature of a message M, no padding of the hashcode is necessary; however, the hashcode must be converted to an integer by applying the method described in Appendix 2.2 of the standard^{[30]}.

The basic idea of DSA is for the signer of message M - that is, the possessor of the value x behind the publicly known y = g^{x} mod p - to append a pair of numbers r and s obtained by secretly picking another number k between 1 and q-1, computing r = (g^{k} mod p) mod q (i.e., computing g^{k} mod p and then taking the remainder of that number mod q) and s = k^{-1} (SHA(M)+xr) mod q, where k^{-1} is the multiplicative inverse of k (mod q) and SHA is the Secure Hash Algorithm^{[8]}. He then sends (M, r, s) to the communicating partner. Another NIST standard, SHA (official acronym SHA-1), reduces a character string of any length to a 160-bit string. In the implementation of DSA, q is a 160-bit prime divisor of p-1 and g is an element of order q in F^{*}_{p}.

The receiver of (M, r, s) from the person holding x computes u = s^{-1} SHA(M) mod q and v = s^{-1} r mod q and then checks that ((g^{u})(y)^{v} mod p) mod q equals r, where y = g^{x} mod p. If it doesn't, then, by elementary number theory, something definitely went wrong. If it does, then, according to NIST, one can safely assume that the message M came from the presumably unique individual who knows the discrete logarithm x of y. Table 1 shows the sequence of the DSA scheme.

Table 1: Digital Signature Algorithm (DSA)
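The signing and verification equations above can be sketched with toy parameters (far too small for real use; the prime pair p = 283, q = 47, the keys and the nonce k below are illustrative assumptions, and the SHA-1 digest is simply reduced mod q in place of the full FIPS hash-to-integer conversion):

```python
import hashlib

# Toy parameters (illustrative only): q divides p - 1,
# and g is an element of order q in Z_p*.
p, q = 283, 47                      # 283 - 1 = 282 = 6 * 47
g = pow(2, (p - 1) // q, p)         # g = 2^6 mod 283 = 64, of order 47

def H(msg):
    # SHA-1 digest reduced mod q (stand-in for the FIPS conversion step)
    return int.from_bytes(hashlib.sha1(msg).digest(), "big") % q

def sign(msg, x, k):
    # r = (g^k mod p) mod q,  s = k^-1 (SHA(M) + x*r) mod q
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (H(msg) + x * r) % q
    return r, s

def verify(msg, r, s, y):
    # u = s^-1 SHA(M) mod q, v = s^-1 r mod q; accept iff (g^u y^v mod p) mod q = r
    u = pow(s, -1, q) * H(msg) % q
    v = pow(s, -1, q) * r % q
    return (pow(g, u, p) * pow(y, v, p) % p) % q == r

x = 24                              # private key (illustrative)
y = pow(g, x, p)                    # public key y = g^x mod p
r, s = sign(b"hello", x, k=13)      # k must be secret and fresh per message
ok = verify(b"hello", r, s, y)
```

Verification succeeds because g^{u}y^{v} = g^{s^{-1}(SHA(M)+xr)} = g^{k} (mod p), so reducing modulo q recovers r.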

The security of DSA is based on the assumption that the only attacks are either those that work in the multiplicative subgroup of order q without exploiting any special properties of that group, or else methods, such as index-calculus ones, which work in the full multiplicative group modulo p. There is no proof that some algebraic relation could not be exploited to find an improved algorithm.

To date the **digital signature** algorithm remains seemingly secure, and will remain so until the running times of Shanks' and Pollard's methods are improved substantially or a more effective algorithm with a better running time emerges to threaten its security. Such an alternative algorithm would not need to be subexponential to break the DSA: a method that runs in time q^{1/4} in a group Q = <q> of prime order would destroy it. Thus, the DSA seems to have attained both a high level of security and low signature storage and implementation time. This lack of progress in developing better algorithms capable of breaking DSA has provided a subjective feeling of comfort to crypto-designers and led them to choose crypto-security parameters close to the edge of what is feasible. However, recently the DSA has been superseded by the ECDSA, a similar system based on the group of points of an elliptic curve rather than a finite field^{[16]}.

**DISCRETE LOGARITHM RELATED PROBLEM**

Let G be a cyclic group and let g be a generator of G such that G = {g^{0}, g^{1},...., g^{n-1}}, where n = |G|. The discrete log function Dlog_{g,G}: G→Z_{n} takes as input a group element γ and returns the unique α ∈ Z_{n} such that γ = g^{α}. There are several computational problems related to this function that are used in cryptographic primitives. In all cases, we consider an attacker that knows the group G and the generator g: given g^{α}, find α (the DLP); alternatively, given g^{α} and g^{β}, find g^{αβ} (the Diffie-Hellman problem, DHP).

**Methods for solving DLP:** Recall the communicating partners, Alice and Bob. We assume that they have decided in advance to use the DH protocol to form a shared master secret key for secure communication, given by k = g^{s} mod p, where s = s_{A}s_{B} and s_{A}, s_{B} lie in the range [1, n-1], n being the order of the group. Clearly, the key exchange system is broken as soon as Eve, the eavesdropper, can determine s_{A} from the known k_{A} = g^{s_A} (or s_{B} from k_{B} = g^{s_B}). This is the motivation for us to look at various techniques to solve:

g^{s} ≡ r (mod p) (4)

where g, r and p are known and s needs to be determined. The eavesdropper, who happens to have overheard the exchange and thus knows g, p and r, will hopefully not be able to compute s. The problem of how to solve g^{s} ≡ r (mod p) is called the discrete logarithm problem (DLP) (i.e., log_{g} r = s)^{[32,33]}. If the discrete log problem for the group G = <g> is easy, an eavesdropper can compute either s_{A} or s_{B} and can find out what g^{s_A s_B} is. It is an important open question whether determining g^{s_A s_B} knowing just g^{s_A} and g^{s_B} is as hard as the discrete log problem in general^{[34-36]}. However, it is important to note that a fast discrete log algorithm would definitely destroy the crypto-security and utility of the widely used Diffie-Hellman crypto-protocol. The same threat also affects other crypto-security schemes based on the DLP, such as the ElGamal cryptosystem and the **digital signature** algorithm (DSA). This factor has generated huge research interest in the complexity of discrete logs^{[37-39]}.

For real-time practical application when implementing a DLP-based crypto-algorithm like DH and the like (where the certificate and signed hash have been added to prevent the man-in-the-middle attack), the area of most concern is the fact that the specifications for generation of the shared master secret key and the certificate for authentication purposes fix the values of p and g. One must be careful, since under some conditions the discrete log is easy to compute and, for this reason, the value of p must be chosen carefully. For example, it is easy to compute the discrete logarithm when p-1 has only small prime factors, which makes the group susceptible to the Pohlig-Hellman attack, whose time complexity is bounded by the largest prime factor of the group order. Therefore p is usually chosen to be a safe prime, i.e., one for which (p-1)/2 is itself prime, say q. Likewise, some care must be given to the choice of g so that the subgroup generated by g is relatively large, but this is usually an easier choice to make than the choice of p. The time complexity is thus the same for both methods (one where the generator creates the whole group and the other where the generator generates only the subgroup of order q)^{[40]}.
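A quick way to screen candidate moduli against this small-factor weakness is to test for safe primes. A minimal sketch, assuming a fixed-base Miller-Rabin test (deterministic for the magnitudes tested here):

```python
def is_probable_prime(n):
    """Miller-Rabin with fixed small bases (deterministic for n < 3.3e14)."""
    if n < 2:
        return False
    for q in (2, 3, 5, 7, 11, 13, 17):
        if n % q == 0:
            return n == q
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False          # base a witnesses that n is composite
    return True

def is_safe_prime(p):
    """p is a safe prime if both p and (p-1)/2 are prime."""
    return is_probable_prime(p) and is_probable_prime((p - 1) // 2)
```

For example, 23 and 83 are safe primes (11 and 41 are prime), while 29 is not (14 is composite).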

The best known general algorithms for solving discrete logarithms are still quite slow, but the majority of the time is usually consumed in easily parallelizable precomputations about the group (for a given choice of p and g). Once the precomputation is finished, any discrete logarithm in that group is easily found. In most applications implementing secure network connections, it is known that most connections will be created using the only key exchange algorithm defined in the specification. The time and expense required to break the majority of secure connections is, therefore, only slightly greater than the time and expense required to perform the aforementioned precomputations for the group specified in that standard. One time-honored rule of security design is that the value of the data being protected should be less than the expense required to break the crypto-security system.

**GENERAL ATTACKS ON DISCRETE LOGARITHM PROBLEM**

There are many ways Eve could attempt to acquire the shared master secret key. One option is to exploit the weakest link in the crypto-security system. Breaking the underlying crypto-algorithm is in most cases the hardest option, to be attempted only as a last resort. Instead, the eavesdropper might opt to exploit other weaknesses, such as recovering a key by observing the power consumption or electromagnetic radiation of the crypto-devices, finding vulnerabilities in the crypto-security protocols, or simply stealing the key through clever interactive social engineering with those trusted to safeguard it. In most cases, however, the crypto-algorithms remain the most important core tool in crypto-security applications.

If we assume that Eve has no other alternative available and so must resort to a brute-force attack on the core crypto-algorithm, then she is required to solve the underlying DLP, i.e., g^{s} ≡ r (mod p).

In applying the brute-force approach to find s from g^{s}, she would have to try s = 0, 1, 2, .... until a solution is found or, alternatively, to put g^{0}, g^{1}, g^{2}, .... in a table and look for r. Either way, the complexity is p. If p consists of t bits, we can say that the complexity is given by 2^{t}, so the complexity grows exponentially in t. There are much better methods that she can resort to, which balance the time complexity with the available memory, as is the case with Shanks' Baby-Step Giant-Step (BSGS) method. Other methods are Pollard's rho, Silver-Pohlig-Hellman and the index calculus, all of which we discuss in this paper. Of all these methods, the index calculus, a subexponential-time algorithm, offers the best performance in cracking DLP-based crypto-schemes. In the case of prime fields, there is a more advanced version of the index calculus known as the Number Field Sieve (NFS), which solves the DLP in expected time^{[41]} exp(((64/9)^{1/3}+o(1))(ln p)^{1/3}(ln ln p)^{2/3}).

If q = 2^{m}, there is a variation of the index calculus known as Coppersmith's algorithm, which solves the DLP in expected time^{[38]} exp((c+o(1))(ln q)^{1/3}(ln ln q)^{2/3})

for some c<1.587. Note that the DLP is still considered hard in these groups because the runtimes of these algorithms are not bounded by any polynomial in log q. However, the existence of these subexponential-time algorithms means that one must use larger key sizes than if only exponential-time attacks were known. For example, for practical security reasons it is recommended that the prime p have at least 1024 bits (Fig. 1).
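The exhaustive search described above can be sketched as follows (a toy sketch; the small parameters p = 139, g = 2, r = 43 are assumptions for illustration):

```python
def dlog_bruteforce(g, r, p):
    """Try s = 0, 1, 2, ... until g^s = r (mod p); O(p) time, O(1) space."""
    e = 1                      # g^0
    for s in range(p - 1):
        if e == r:
            return s
        e = e * g % p          # advance to g^(s+1)
    return None                # r is not a power of g modulo p
```

Here dlog_bruteforce(2, 43, 139) returns 115, since 2^{115} ≡ 43 (mod 139); for a t-bit prime the loop takes on the order of 2^{t} steps.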

**Shanks' Baby-Step Giant-Step (BSGS):** Suppose that one has enough memory available to store m elements of Z_{p}. Then Shanks' algorithm gives us an efficient method to balance the time complexity with the available memory to solve the discrete log. The method requires one to compute the baby steps (BS) g^{0}, g^{1}, g^{2}, ...., where the exponents increase by 1, and sort these elements into a look-up table. Next check whether r is in the table; if not, check whether r/g^{m} is in the table; if not, check r/g^{2m}, and so on - the Giant Steps (GS). When r/g^{im} is in the table, say equal to g^{k}, one has found the unknown exponent s = im+k. The time complexity of the baby-step giant-step method is p/m, so the product of memory requirement and time complexity is still p≈2^{t}.

**Shanks' Baby-Step Giant-Step (BSGS): System requirement:** enough memory available to store m elements of Z_{p}.

**Input:** A finite group G = <g> of order n and g, y ∈ G.

**Output:** The discrete logarithm of y to the base g.

**Procedure:**

• | Compute m = ⌈√n⌉, the ceiling of the square root of the group order. |

• | Construct a table T of (i, g^{i.m}) pairs, 1≤i≤m and sort by the second component in the pair. |

• | Compute y.g^{j}, starting at j = 0. (Note that y.g^{j} equals g^{x+j}, where x is the unknown discrete log.) |

• | Compare y.g^{j} with the g^{i.m} entries in T. If a match is found, then g^{i.m} = g^{x+j} and thus x = i.m-j is the desired discrete log. If there is no such match, then try another value of j, with j≤m. |

**Time complexity:** O(√n) and **Space complexity:** O(√n).

As an example, let p = 139. Then g = 2 is a generator of G = Z^{*}_{139} with order n = p-1 = 138. We look for the discrete logarithm of y = 43 to the base g.

**Implementing BSGS:** First let us compute m = ⌈√138⌉ = 12, which we use to construct the table T of (i, g^{i.m}) pairs, for 1 ≤ i ≤ 12.

Comparing the two table entries, we observe that we have a match in the entries for i = 10 and j = 5, which gives x = i.m-j = 120-5 = 115; i.e., the discrete logarithm of 43 to the base 2 in Z^{*}_{139} is x = 115. Hence, 2^{115} ≡ 43 (mod 139).
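The worked example can be reproduced with a short sketch of the table-based variant described above (the parameters p = 139, g = 2, y = 43 are those of the example):

```python
from math import isqrt

def bsgs(g, y, p, n):
    """Shanks' BSGS: find x with g^x = y (mod p), where n is the order of g."""
    m = isqrt(n) + 1
    # giant-step table T: g^(i*m) -> i, for i = 1..m
    gm = pow(g, m, p)
    table = {}
    e = 1
    for i in range(1, m + 1):
        e = e * gm % p
        table[e] = i
    # baby steps: y * g^j for j = 0..m; a match g^(i*m) = y*g^j gives x = i*m - j
    gamma = y
    for j in range(m + 1):
        if gamma in table:
            return table[gamma] * m - j
        gamma = gamma * g % p
    return None
```

Here bsgs(2, 43, 139, 138) finds the match at i = 10, j = 5 and returns 115.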

In Shanks' algorithm, the check for matches between the two sorted lists of m entries each can be done in linear time (assuming that the representations of elements are compact enough). Hence, the running time of the algorithm is dominated by the arithmetic required to compute the two tables and the time to sort them. Further, Shanks' algorithm is deterministic. If one is willing to give up determinism, one can replace sorting by hashing, thereby speeding up the process. On the other hand, there is no easy way to reduce the space requirement (other than by increasing the running time), which is of order m = |<g>|^{1/2}^{[42]}. There are other general algorithms for the discrete log problem that run in time O(|<g>|^{1/2}) and use very little space; both are randomized and are due to Pollard^{[13]}, which we discuss next.

**Pollard's Rho (ρ) method for solving DLP:** Basic idea - Pollard's rho algorithm is based on the birthday paradox^{[43,44]}. If we randomly choose elements (with replacement) from a set of N numbered elements, we only need to choose about √N elements until we get one element twice (called a collision). This can be applied to find discrete logarithms as follows. By choosing a, b ∈_{R} [0, N-1], one obtains a random group element g^{a}h^{b}. Such group elements are randomly selected until we get a group element twice. If g^{a}h^{b} and g^{a'}h^{b'} represent the same group element, then g^{a-a'} = h^{b'-b}, whence:

log_{g} h ≡ (a-a')(b'-b)^{-1} (mod N) (5)

This naive approach has an expected storage requirement of E(T) group elements, where T is the random variable describing the number of group elements chosen until the first collision occurs. The main thrust of Pollard's rho method is then how to detect a collision without the need to store group elements. Collision detection in this method is done by means of a deterministic walk defined by a function f: G → G. For actual implementations, f is chosen such that it approximates a random function as closely as possible.

The function originally suggested by Pollard (for Z^{*}_{p}) can be generalized towards arbitrary cyclic groups as:

f(x) = h.x if x ∈ S_{1}; f(x) = x^{2} if x ∈ S_{2}; f(x) = g.x if x ∈ S_{3} (6)

where S_{1}, S_{2} and S_{3} are three sets of roughly the same size which form a partition of G. However, Teske^{[39,40]} has shown that Pollard's function f is not random enough and gives an alternative, better function:

f(x) = x.m_{s} for x ∈ M_{s}, where the multipliers m_{s} = g^{a_s}h^{b_s} are fixed random group elements (7a)

where again the M_{s} are roughly of the same size and form a partition of G, which this time is partitioned into more than three subsets. For both functions it is of course necessary that determining the subset M_{s} or S_{i}, respectively, to which a group element belongs be very efficient.

By starting at a random point and iteratively applying such a function, pseudorandom points are generated. Because the group is finite, we eventually arrive at some point for a second time (i.e., a collision occurs), which happens in expectation after E(T) ≈ √(πN/2) steps; the sequence of subsequent points then cycles forever. With very little time and space overhead, it is possible to detect such a cycle with Floyd's cycle-finding algorithm (or with an improved variant by Brent^{[45]}).

**The Mechanics of Pollard’s Rho (ρ) method Input:** A finite group G = <g> of order n and g, y ∈ G.

**Output:** The discrete logarithm of y to the base g.

**Procedure:**

• | Break the set G into three approximately equal-sized sets S_{1}, S_{2} and S_{3}. |

• | Define a sequence of elements over G: x_{0}, x_{1}, x_{2},.... with x_{0} = 1 and |

x_{i+1} = y.x_{i} if x_{i} ∈ S_{1}; x_{i+1} = x_{i}^{2} if x_{i} ∈ S_{2}; x_{i+1} = g.x_{i} if x_{i} ∈ S_{3} (7b)

• | This sequence, in turn, defines two other integers sequences a_{i}, b_{i} such that |

a_{0} = b_{0} = 0

a_{i+1} = a_{i} if x_{i} ∈ S_{1}; a_{i+1} = 2a_{i} mod n if x_{i} ∈ S_{2}; a_{i+1} = a_{i}+1 mod n if x_{i} ∈ S_{3} (8)

b_{i+1} = b_{i}+1 mod n if x_{i} ∈ S_{1}; b_{i+1} = 2b_{i} mod n if x_{i} ∈ S_{2}; b_{i+1} = b_{i} if x_{i} ∈ S_{3} (9)

• | Cycle-finding calculation: The next step is to find a pair (x_{i}, x_{2i}) with x_{i} = x_{2i}. |

In this case:

x = log_{g} y ≡ (a_{i}-a_{2i})(b_{2i}-b_{i})^{-1} (mod n) (10)

As long as b_{i} ≢ b_{2i} (mod n), the discrete log can be calculated as above. In the rare case that a collision x_{i} = x_{2i} is not found, or that b_{i} ≡ b_{2i} (mod n), the procedure can be repeated by selecting random a_{0}, b_{0} ∈ [1, n-1] and restarting with x_{0} = g^{a_0}y^{b_0}.

**Time-complexity:** O(√n). **Space-complexity:** O(1).

**Implementing Pollard's Rho for solving DLP:** As an example, let H = Z^{*}_{383}. Then g = 2 is a generator of the subgroup G of Z^{*}_{383} of order n = 191. Suppose y = 228. Partitioning G into three sets according to the rule: x ∈ S_{1} if x ≡ 1 mod 3; x ∈ S_{2} if x ≡ 0 mod 3 and x ∈ S_{3} if x ≡ 2 mod 3, and setting x_{0} = 1, a_{0} = 0 and b_{0} = 0, results in:

The calculations show that: x_{14} = x_{28} = 144.

Computing x ≡ (a_{14}-a_{28})(b_{28}-b_{14})^{-1} (mod 191) gives the discrete logarithm x = log_{2} 228 = 110.

Indeed: 2^{110} ≡ 228 (mod 383).
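A compact sketch of the method with Floyd cycle detection, using the example's partition rule (the parameters g = 2, y = 228, p = 383, n = 191 are those of the example; since n is prime, the inverse in Eq. 10 exists whenever b_i ≢ b_2i):

```python
def pollard_rho_dlog(g, y, p, n):
    """Find x with g^x = y (mod p) in the subgroup of prime order n."""
    def step(x, a, b):
        # x is maintained as g^a * y^b; update (x, a, b) per Eq. 7b-9
        if x % 3 == 1:                       # S1: multiply by y
            return x * y % p, a, (b + 1) % n
        if x % 3 == 0:                       # S2: square
            return x * x % p, 2 * a % n, 2 * b % n
        return x * g % p, (a + 1) % n, b     # S3: multiply by g

    x, a, b = 1, 0, 0                        # tortoise: x_i
    X, A, B = x, a, b                        # hare: x_2i
    for _ in range(n):
        x, a, b = step(x, a, b)
        X, A, B = step(*step(X, A, B))
        if x == X:                           # collision x_i = x_2i
            r = (B - b) % n
            if r == 0:
                return None                  # failure: restart with random a0, b0
            return (a - A) * pow(r, -1, n) % n
    return None
```

Here pollard_rho_dlog(2, 228, 383, 191) detects the collision x_14 = x_28 and returns 110.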

**Pollard's Lambda (λ) method:** Pollard's lambda method is also known as the method for catching kangaroos.

**Input:** A finite group G = <g> of order n; g, y ∈ G and a value w standing for the size of an interval [A, B] in which the discrete logarithm is known to lie, i.e., w = B-A.

**Output:** The discrete logarithm of y to the base g.

**Procedure:**

• | Compute two sequences T and W (called kangaroo trails), using a pseudorandom jump function f: G → S, where S is a small set of jump distances. T's trail begins at y'_{0} = g^{B} and proceeds by y'_{i+1} = y'_{i}.g^{f(y'_i)}, so that y'_{i} = g^{B+d_i}, where d_{i} = f(y'_{0})+....+f(y'_{i-1}) is the distance travelled from y'_{0}. After N jumps the tame kangaroo stops and sets a trap at y'_{N} = g^{B+d_N}. |

W's trail begins at y_{0} = y = g^{x} and proceeds by the same rule, y_{j+1} = y_{j}.g^{f(y_j)}, with distance d_{M} after M jumps. Because both trails use the same jump function, the wild kangaroo will fall into the trap if its trail hits any point of T's trail, after which it follows T's trail exactly. The search ends when y_{M} = y'_{N} for some y_{M} in W's trail, at which point the discrete logarithm is calculated as x = B+d_{N}-d_{M}.

• | If no collision occurs (the probability of which can be controlled) before d_{M} exceeds B+d_{N}-A = w+d_{N}, the run is terminated (failure): W has travelled beyond the trap. Subsequent iterations of the above sequences, with fresh jump functions, can be run as required. |

**Time-complexity:** O(√w) and **Space-complexity:** O(log w). The Shanks method and the Pollard kangaroo method can be used to compute the discrete log of y in about √m steps when this discrete log is known to lie in an interval of length at most m. Hence, crypto-designers have to be careful not to limit the range in which discrete logs lie.
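A minimal kangaroo sketch following the procedure above (the parameters g = 2, p = 383, y = 228 with the assumed interval [100, 120] are illustrative; the jump set is re-randomized on failure):

```python
import math
import random

def kangaroo(g, y, p, a, b, seed):
    """One tame/wild run for x = log_g y, known to lie in [a, b]."""
    w = b - a
    rnd = random.Random(seed)
    jumps = [rnd.randrange(1, math.isqrt(w) + 2) for _ in range(4)]
    f = lambda z: jumps[z % 4]            # pseudorandom jump distance
    # tame kangaroo: start at g^b, make N jumps, set the trap at g^(b + dt)
    N = 4 * (math.isqrt(w) + 1)
    xt, dt = pow(g, b, p), 0
    for _ in range(N):
        s = f(xt)
        xt = xt * pow(g, s, p) % p
        dt += s
    # wild kangaroo: start at y = g^x and jump until trapped or past the trap
    xw, dw = y, 0
    while dw <= w + dt:
        if xw == xt:
            return b + dt - dw            # collision: x + dw = b + dt
        s = f(xw)
        xw = xw * pow(g, s, p) % p
        dw += s
    return None                           # wild kangaroo escaped the trap

def pollard_lambda(g, y, p, a, b):
    for seed in range(200):               # re-randomize the jump set on failure
        x = kangaroo(g, y, p, a, b, seed)
        if x is not None and pow(g, x, p) == y:
            return x
    return None
```

Here pollard_lambda(2, 228, 383, 100, 120) returns 110, since 2^{110} ≡ 228 (mod 383) and 110 lies in the given interval.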

To date, the running times of the Pollard and Shanks algorithms have not been improved to any substantial degree. This has led to the assumption that, in the absence of other structure in a cyclic group G = <g> of prime order, it will require on the order of |G|^{1/2} operations to compute a discrete log in G. Many of the modern asymmetric-key cryptosystems based on DLPs, such as DSA^{[2,8,9]}, rely on Schnorr's method^{[10]}, which reduces the computational burden normally imposed by having to work in a large finite field by working within a large multiplicative subgroup Q = <q> of prime order, with the assumption that the discrete log in Q cannot be solved much faster than |Q|^{1/2} steps. For q of order 2^{160}, as in DSA, this is about 10^{24} group operations. Since group operations are typically considerably more intensive than the basic instructions of ordinary computers (^{[46]} for the case of ECC), it is reasonable to estimate that 10^{24} group operations might require at least 10^{26} ordinary computer instructions. A mips-year (MY) is equivalent to about 3×10^{13} instructions, so breaking DSA, say, with the Shanks or Pollard algorithms would require over 10^{12} MY, which appears to be adequate for a while at least. Several crypto-researchers, including Richard Crandall and Len Adleman, have observed that all the instructions executed by digital computers in history are on the order of Avogadro's number, about 6×10^{23}. The largest factoring projects so far have used 10^{17} operations and other large distributed projects have accumulated on the order of 10^{20} operations (Table 2).

Table 2: Computing power available for integer factorization (in MY)

**The Pohlig-Hellman Algorithm:** Here we present the Pohlig-Hellman algorithm for computing discrete logarithms^{[47]}. If G is not of prime order, it reduces a DLP in G to several DLPs in smaller groups. It thus restricts the possible choices of groups in which the DLP is maximally hard. The idea is to take advantage of many-to-one homomorphic images in which the DLP is easier to solve. This will be important in the sequel because generalizations suggest that, even if one considers structures other than groups for the DLP, one should consider using simple structures; specifically, using a simple structure is the most reliable way to avoid similar attacks^{[2,47,48]}.

**The mechanics of Pohlig-Hellman Algorithm **

**Input:** A finite group G = <g> of order n; g, y ∈ G.

**Output:** The discrete logarithm of y to the base g.

**Procedure:**

1. | Let the factorization of n be n = p_{1}^{e_1}p_{2}^{e_2}....p_{r}^{e_r} | |

2. | For i = 1 to r compute the decomposition g_{i} = g^{n/p_i^{e_i}} and y_{i} = y^{n/p_i^{e_i}}, which lie in the subgroup of order p_{i}^{e_i} | |

3. | Set x_{i} = 0 (the residue x_{i} ≡ x mod p_{i}^{e_i} will be built up one base-p_{i} digit at a time) | |

4. | For j = 0 to e_{i}-1 compute the digit h_{j} = log_{γ}((y_{i}.g_{i}^{-x_i})^{p_i^{e_i-1-j}}), where γ = g_{i}^{p_i^{e_i-1}} has order p_{i} (e.g., using Pollard's rho method) and set x_{i} = x_{i}+h_{j}.p_{i}^{j} | |

5. | Use the CRT to compute the discrete log x with 0≤x≤n-1 from the congruences x ≡ x_{i} (mod p_{i}^{e_i}), 1≤i≤r. | |

6. | Output x. |

Note that there are several well-known algorithms for performing step 5 in polynomial time. The idea is that from the factorization in step 1 one can find x modulo each prime power p_{i}^{e_i}; this is further reduced by finding the base-p_{i} expansion of x_{i} one digit at a time. Observe that each time step 4 is reached, γ has order p_{i}. Thus, one need only compute discrete logarithms in subgroups of order p_{i}. Unless there is some trivial way to solve the DLP in G, this is more efficient than computing one discrete logarithm in the full group of order n. It is thus desirable to choose G with prime order, so that this algorithm yields no reduction at all.

In general, we observe that if G is not of prime order, then there exist smaller groups G_{i} and many-to-one homomorphisms f_{i} : G → G_{i}. One may then solve the corresponding DLP in each homomorphic image. Furthermore, if there exist such G_{i} such that the combined map

f = (f_{1},...., f_{k}) : G → G_{1}×....×G_{k}

is a monomorphism, then solving the DLP in each G_{i} solves the DLP in G up to an application of the Chinese remainder theorem.

**Time complexity:** O(Σ_{i} e_{i}(log n+√p_{i})) group operations, provided the factorization of n is given.

**Space complexity:** O(1) if used in conjunction with Pollard’s rho method, for example.

**Implementing Pohlig-Hellman Algorithm:** Let G = Z^{*}_{271}, g = 43 a generator of Z^{*}_{271} of order n = 270. Let y = 210. The discrete logarithm is computed as follows:

• | The prime factorization of n = 270 = 2•3^{3}•5. |

• | Compute x_{1} ≡ x (mod 2): g^{n/2} = 43^{135} ≡ 270 ≡ -1 and y^{135} = 210^{135} ≡ 270, so x_{1} = 1. |

• | Compute x_{3} ≡ x (mod 5): in the subgroup of order 5 generated by g_{3} = 43^{54}, solving y_{3} = 210^{54} = g_{3}^{x_3} gives x_{3} = 2. |

• | Compute x_{2} ≡ x (mod 27) one base-3 digit at a time, working in the subgroup of order 27 generated by g_{2} = 43^{10}. The element γ = g_{2}^{9} ≡ 28 has order 3. The digits are found as h_{0} = 2, h_{1} = log_{28}242 = 2 and h_{2} = log_{28}1 = 0. Hence x_{2} = h_{0}+h_{1}.3+h_{2}.3^{2} = 2+6+0 = 8. |

• | Solving the triple of congruences: |

X ≡ 1 (mod 2) | X ≡ 8 (mod 27) | X ≡ 2 (mod 5)

we use the CRT, which gives X ≡ 197 (mod 270),

such that x = log_{43}210 = 197

Indeed: 43^{197} ≡ 210 (mod 271)
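The whole procedure, with brute-force logs in the tiny subgroups and a CRT recombination, can be sketched as follows (a sketch of the algorithm above; helper names are our own, and the usage note below applies it to the example's parameters):

```python
def factorize(n):
    """Prime factorization of n by trial division: {prime: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def dlog_naive(g, h, p, order):
    """Brute-force log in the tiny subgroup of the given order."""
    e = 1
    for x in range(order):
        if e == h:
            return x
        e = e * g % p
    return None

def crt(residues, moduli):
    """Combine x = r_i (mod m_i) for pairwise coprime m_i (Garner's method)."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        t = (r - x) * pow(m, -1, mi) % mi
        x += m * t
        m *= mi
    return x % m

def pohlig_hellman(g, y, p, n):
    """Discrete log of y to base g in Z_p*, where g has (smooth) order n."""
    residues, moduli = [], []
    for q, e in factorize(n).items():
        qe = q ** e
        g_i = pow(g, n // qe, p)           # generator of the subgroup of order q^e
        y_i = pow(y, n // qe, p)
        gamma = pow(g_i, q ** (e - 1), p)  # element of order q
        x_i = 0
        for j in range(e):                 # one base-q digit of x_i at a time
            h = pow(y_i * pow(g_i, -x_i, p) % p, q ** (e - 1 - j), p)
            x_i += dlog_naive(gamma, h, p, q) * q ** j
        residues.append(x_i)
        moduli.append(qe)
    return crt(residues, moduli)
```

Here pohlig_hellman(43, 210, 271, 270) recombines x ≡ 1 (mod 2), x ≡ 8 (mod 27) and x ≡ 2 (mod 5) into 197.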

**INDEX-CALCULUS METHOD**

Over the finite fields where the DLP is defined, there is additional structure beyond the multiplicative structure. The index-calculus methods take advantage of this extra structure^{[13,49]}.

The index calculus method is a subexponential algorithm for solving the DLP over a finite group G, i.e., given g, y ∈ G, g a generator, we seek a value β ∈ Z/(|G|)Z satisfying y = g^{β}. We write β = log_{g} y or β = ind_{g} y, for the index of y with respect to g.

The basic idea, which goes back to Kraitchik^{[50]}, is that if:

x_{1}x_{2}....x_{k} = y_{1}y_{2}....y_{l} (11)

for some elements x_{i}, y_{j} of GF(q)*, then:

log_{g} x_{1}+....+log_{g} x_{k} ≡ log_{g} y_{1}+....+log_{g} y_{l} (mod q-1) (12)

If we collect many equations of the above form (with at least one of them involving an element z, such as g, for which log_{g} z is known) and they do not involve too many distinct x_{i} and y_{j}, then the system can be solved. This is similar to the situation in the IFP^{[51]}. Progress in index calculus algorithms has come from better ways of producing relations that lead to equations such as Eq. 11.

The index calculus method involves forming relations over what is called a factor base, F = {p_{1}, p_{2}, ...., p_{v}}, formed by choosing the list of the first v primes. How many we choose is critical. An integer is smooth with respect to F if its prime factors are all less than or equal to the largest prime on our list.

Once the number of relations found (slightly) exceeds the number of primes in the factor base, we can solve the system of equations to recover the discrete logs of these primes. From these logs we can then, with more computation, recover the discrete log of any chosen element.

In determining the discrete logs of the elements, we start by looking at powers of the generator: g, g^{2}, g^{3}, ...., mapping these values to the integers if the field is given abstractly rather than as Z_{p}, and trying to write them in product form over F.

More precisely, a random integer k with 1≤k≤p-1 is chosen and g^{k} ∈ G is computed. If the value of g^{k} is smooth over F, we record the following relation, derived from g^{k}'s factorization into powers of the first v primes and the fact that Z^{*}_{p} has order p-1:

k ≡ e_{1}log_{g} p_{1}+....+e_{v}log_{g} p_{v} (mod p-1) (13)

where p_{i} is the i^{th} prime from our factor base and e_{i} is its corresponding exponent in the factorization of g^{k}. We continue computing powers of the generator until we obtain v independent relations. We solve these equations (which are typically sparse) to get the discrete log of each of the v primes. Now, to find the discrete log of y ∈ F_{p}, we compute the quantities y, y.g, y.g^{2}, .... and lift these as well to Z if needed. We continue the computation until we find an element y.g^{β} that factors completely over our factor base, such that y.g^{β} = p_{1}^{e_1}....p_{v}^{e_v},

and taking logs on both sides, gives:

log_{g} y ≡ e_{1}log_{g} p_{1}+....+e_{v}log_{g} p_{v}-β (mod p-1) (14)

where e_{i} is the corresponding exponent of the i^{th} prime. We already know the discrete log of each of the v primes, so we can solve this equivalence for log_{g} y. Here, we get a runtime that is subexponential in the size of p, provided we make a good choice for v^{[52]}.
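The relation-collection stage described above can be sketched with trial division (a toy sketch with an assumed factor base of the first four primes and the small group p = 139, g = 2; the linear-algebra stage modulo p-1 is omitted):

```python
def trial_factor(m, base):
    """Exponent vector of m over the factor base, or None if m is not smooth."""
    exps = [0] * len(base)
    for i, q in enumerate(base):
        while m % q == 0:
            m //= q
            exps[i] += 1
    return exps if m == 1 else None

def collect_relations(g, p, base, count):
    """Find exponents k such that g^k (mod p) is smooth over the factor base.
    Each pair (k, e) encodes the relation k = sum e_i*log_g(p_i) (mod p-1)."""
    relations = []
    for k in range(1, p - 1):
        exps = trial_factor(pow(g, k, p), base)
        if exps is not None:
            relations.append((k, exps))
            if len(relations) == count:
                break
    return relations
```

With p = 139, g = 2 and base {2, 3, 5, 7}, collect_relations returns pairs such as (1, [1, 0, 0, 0]), i.e., the relation 1 ≡ log_{2} 2 (mod 138); once v independent relations are in hand, the logs of the factor-base primes follow from linear algebra modulo p-1.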

How can we subvert this attack? We can use really large keys, as previously discussed, but this has significant drawbacks. The system becomes even slower with larger keys, making it undesirable. Additionally, large keys require more computation space, so such cryptosystems cannot fit on small-form-factor, constrained wireless devices that also need to utilize encryption, e.g., smartcards. Since larger keys do not seem to offer much of a fix, we need to consider alternative cryptosystems like ECC, which makes use of the ECDLP instead of the classical DLP; for more detail on the ECDLP see^{[18-20]}.

**The mechanics of index-calculus Input:** A finite group G = <g> of order n and g, y ∈ G

**Output:** The discrete logarithm of y to base g

**Procedure: **

• | (Factor Base) Find a subset F = {p_{1}, p_{2}, ...., p_{v}} of G such that a significant fraction of all elements in G can be efficiently expressed as a product of elements from F. |

• | (Linear Relations) |

• | Select a random integer k, 0≤k≤n-1 and compute g^{k}. |

• | Try to write g^{k} as a product of elements from F: |

g^{k} = p_{1}^{e_1}p_{2}^{e_2}....p_{v}^{e_v} (15)

• | If Eq. 15 is successful, then taking the discrete log of both sides of Eq. 15 results in the linear relations: |

k ≡ e_{1}log_{g} p_{1}+....+e_{v}log_{g} p_{v} (mod n) (16)

• | Collect (v+c) linear relations like Eq. 16, by choosing other random k's |

• | (Discrete Logs of elements F) working modulo n, solve the linear system of (v+c) equations in v unknowns and obtain the value of the discrete logarithms of the factor base: log_{g} p_{i}, 1≤i≤v. |

• | (Actual computation of the discrete log) select a random r, 0≤r≤n-1 and compute y.g^{r}. |

Try to write y.g^{r} as a product of elements of F: |

y.g^{r} = p_{1}^{e_1}p_{2}^{e_2}....p_{v}^{e_v} (17)

• | If y.g^{r} is not representable as in Eq. 17, then choose another r and retry the factorization. Otherwise, taking discrete logs of both sides of Eq. 17 results in: |

log_{g} y ≡ e_{1}log_{g} p_{1}+....+e_{v}log_{g} p_{v}-r (mod n) (18)

**Time-complexity:**

• | L_{p}[1/3, (64/9)^{1/3}] for G = Z^{*}_{p}, using the NFS variant of index calculus. |

• | L_{2^m}[1/3, c] for G = GF(2^{m})* and c<1.587 (Coppersmith's algorithm) |

**Implementation of index-calculus method:** To help us understand the power of cracking DLP-based cryptosystems, we will use the index calculus to recover the master shared secret key we found using the DH key exchange crypto-protocol. Recall, too, that Alice and Bob used the same key to exchange secure data using the ElGamal crypto-algorithm. Suppose Eve at her nondescript hideout seeks to recover the key, so she needs to solve:

g^{s} ≡ 11093904324 (mod p)

where, s(= s_{A}s_{B}) is the shared key, g = 12 is the generator

and p = 12884901893 is the prime integer.

We start with the precomputation stage. We consider our factor base, which consists of the first 15 prime numbers:

F = {p_{1}, p_{2},...., p_{15}}

= {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47}

and try to solve the logarithm problem for all elements in the factor base. In other words, we want to find k_{i} = log_{12} p_{i} for i = 1, ...., 15, i.e., to solve 12^{k_i} ≡ p_{i} (mod p).

Now select a random exponent k, compute 12^{k} (mod p) and check whether the residue can be factored completely by means of the factor base. For example:

12^{422} ≡ 12084561537 = 3·17·23·10302269 (mod p)

where we observe that 10302269 cannot be factored further over our factor base F, so this attempt is discarded. We also recall that the larger the factor base, the easier it is to find residues that do completely factor with respect to the factor base, but the price we pay is having more unknowns k_{i} to solve for. When a residue does completely factor with respect to our factor base, we end up with a linear relation between the unknown k_{i}'s, e.g.:

12^{9625853812} ≡ 4616675 ≡ 5^{2}·7·23·31·37 (mod p)

gives the relation:

9625853812 ≡ 2k_{3}+k_{4}+k_{9}+k_{11}+k_{12} (mod p-1)

where each k_{i} is given by k_{i} = log_{g} p_{i}, computed modulo p-1.
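The smoothness test underlying these relations is simple trial division over F. A minimal sketch, checked against the two residues quoted above (the helper name `factor_over_base` is our own):

```python
# Trial division over the paper's factor base F: the residue 4616675
# factors completely (5^2 * 7 * 23 * 31 * 37) and so yields a relation,
# while 12084561537 = 3 * 17 * 23 * 10302269 leaves a cofactor outside F
# and is discarded.
F = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def factor_over_base(m, base):
    """Return {prime: exponent} if m is base-smooth, else None."""
    factors = {}
    for q in base:
        while m % q == 0:
            m //= q
            factors[q] = factors.get(q, 0) + 1
    return factors if m == 1 else None

smooth = factor_over_base(4616675, F)     # -> {5: 2, 7: 1, 23: 1, 31: 1, 37: 1}
rough = factor_over_base(12084561537, F)  # -> None (cofactor 10302269)
```

The exponent dictionary returned for a smooth residue is exactly the coefficient vector of the corresponding linear relation between the k_{i}'s.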

Next we collect enough relations to enable us to solve for the unknown k_{i}'s. Raising 12 to further exponents, namely 9102048310, 7382258392, 12298253855, 4835947410 and 6525201043, produces residues that also factor completely over F, each giving another linear relation between the k_{i}'s modulo p-1.

Solving the resulting linear system gives the values of k_{1}, k_{2}, …, k_{15}, where all the k_{i}'s are computed modulo p-1.

Finally, we are ready to solve our original problem: 12^{s} ≡ 11093904324 (mod p). Pick a random exponent r, combine it with y and check whether y·g^{r} (mod p) factors completely over our factor base. After a few tries we get lucky:

y·g^{19050076} ≡ 2^{2}·3^{3}·11·31·47 (mod p)

log_{g} y + 19050076 ≡ 2k_{1}+3k_{2}+k_{5}+k_{11}+k_{15} (mod p-1)

log_{g} y ≡ 2k_{1}+3k_{2}+k_{5}+k_{11}+k_{15} - 19050076 (mod p-1)

→ log_{g} y = log_{12} 11093904324 = 33554432

Indeed: 12^{33554432} ≡ 11093904324 (mod p)

The time complexity of the index calculus is given by exp((1.923+o(1)) t^{1/3}(ln t)^{2/3}), where t = ln p^{[53]}. The memory requirement typically equals the square root of the time complexity.
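A quick way to get a feel for this subexponential growth is to evaluate the formula for a few modulus sizes. The bit lengths below are our own illustrative picks and the o(1) term is ignored:

```python
# Evaluate the heuristic index-calculus/NFS running time
# L(p) = exp(1.923 * (ln p)^(1/3) * (ln ln p)^(2/3))
# for a few illustrative modulus bit-lengths (our own choices).
import math

def log2_work(bits):
    """log2 of L(p) for p ~ 2^bits, ignoring the o(1) term."""
    t = bits * math.log(2)              # t = ln p
    ln_L = 1.923 * t ** (1 / 3) * math.log(t) ** (2 / 3)
    return ln_L / math.log(2)

for bits in (512, 1024, 2048):
    print(f"{bits:5d}-bit modulus: ~2^{log2_work(bits):.0f} operations")
```

Doubling the modulus size from 512 to 1024 bits raises the work factor by roughly 2^23 rather than squaring it, which is what "subexponential" means in practice; halving these exponents gives the corresponding memory estimates.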

Table 3: | The complexity of different methods to take discrete logarithms for p≈2^{t} |

This makes the index calculus algorithm and its variants the only known methods for taking discrete logarithms with subexponential complexity (Table 3).

**THE FUTURE STATE OF THE ART IN IFP AND DISCRETE LOGARITHM COMPUTATION**

The basic question at the onset, for an implementer of DLP-based crypto-algorithms, is how large a discrete log problem can be handled by currently available state-of-the-art computational tools. For a conservative estimate, it is better to consider what has been accomplished on the Integer Factorization Problem (IFP): much more effort has been devoted to IFP than to discrete logs and most of the leading algorithms are similar. Thus, although discrete logs in prime fields do appear harder than factoring integers of the same size, it is prudent to disregard this difference when choosing crypto-security parameters. The Number Field Sieve (NFS) is currently at the cutting edge of research into integer factoring algorithms, capable of factoring large composite numbers of over 100 digits^{[54]}. The current record in factoring a generally hard integer is the 200-decimal-digit challenge integer from RSA Data Security, Inc., RSA-200, which was factored with the general number field sieve (GNFS) on May 9, 2005 by Bahr, Boehm, Franke and Kleinjung^{[55]}. Among the Cunningham integers, the record is the factorization of a 248-decimal-digit integer by the Special Number Field Sieve (SNFS), accomplished by Aoki, Kida, Shimoyama, Sonoda and Ueda (CRYPTREC) on April 04, 2004^{[56,57]}.

Computing power is measured in MIPS-years: a million-instructions-per-second computer running for one year or about 3x10^{13} instructions. A 100-MHz Pentium III is about a 50-MIPS machine; a 1600-node Intel Paragon is about 50,000 MIPS. In 1983, a Cray X-MP supercomputer factored a 71-digit number in 0.1 MIPS-years, using 9.5 CPU hours. That's expensive. Factoring the 129-digit number in 1994 required 5000 MIPS-years and used the idle time on 1600 computers around the world over an eight-month period. Although it took longer, it was essentially free.

Fig. 2: | Comparison of security levels of ECC and RSA/DSA |

These two computations used what is called the quadratic sieve, but a newer, more powerful algorithm has arrived. The general number field sieve is faster than the quadratic sieve for numbers of well over 116 digits and can factor a 512-bit number over 10 times faster: it would take less than a year to run on an 1800-node Intel Paragon. Figure 2 shows current security levels in terms of MIPS-years estimates for various public-key sizes.

Odlyzko^{[58]} argues that, given the record of improvements in the index calculus algorithms, it seems imprudent to assume that the current version of GNFS is the best that will be available for a long time. At the least, it seems a reasonable precaution to assume that future algorithms will be as efficient as today's SNFS, in which case even 1024-bit RSA moduli might be insecure for anything but short-term protection.

Therefore, the baseline trade-off is that the size of n should be chosen such that the time and cost of performing the factorization exceed the value of the secured/encrypted information. But even then, great care must still be taken in the overall crypto-design, as recent developments in integer factorization have come much faster than foreseen and it is a precarious matter to venture upon quantitative forecasts in this field. Moreover, one should realize that it always remains possible that a new computational method will be invented from an unsuspected quarter which makes factoring easy (e.g., quantum computing, if an operative quantum computer were realized in the not-so-distant future); fortunately or unfortunately, depending on which side you are on, no one knows how to build one yet. Shor^{[59,60]} showed that if such a machine could be built, integer factorization and discrete logs (including elliptic curve discrete logs) could be computed in polynomial time. This result has stimulated an explosion of research on quantum computers^{[61]}. The one comforting factor is that all experts agree that even if quantum computers are eventually built, it will take many years to do so (at least for machines on a scale that will threaten modern public-key cryptosystems), so there will be advance warning about the need to develop and deploy alternative crypto-algorithms.

**CONCLUSIONS**

We have discussed a method for implementing a public-key cryptosystem whose security rests in part on the difficulty of solving discrete logs. If the crypto-security designs and methods are appropriately implemented, it permits secure communications to be established without the use of couriers to carry keys and also permits one to sign digitized documents.

In general, the strength of encryption is related to the difficulty of discovering the key, which in turn depends on both the cipher used and the length of the key. No matter which technique you choose, you must keep in mind that a sufficiently determined cryptanalyst may eventually decipher the message. Hence, you should always take all the necessary precautions to protect your data, ranging from the proper choice of cryptographic keys to physically protecting your assets and yourself.

Overall, the baseline trade-off is that the size of n and/or the prime p should be chosen such that the time and cost of cracking the cryptosystem exceed the value of the secured/encrypted information. But even then, great care must still be taken in the overall crypto-design, as recent developments in integer factorization and, equivalently, in solving the DLP have come much faster than foreseen and it is a precarious matter to venture upon quantitative forecasts in this field. Moreover, one should realize that it always remains possible that a new computational method will be invented from an unsuspected quarter which makes factoring 'easy' (e.g., quantum computing). Today, the wise crypto-designer is ultraconservative when choosing key lengths for public-key cryptosystems. He must consider the intended security, the key's expected lifetime and the current state of the art in factoring or, equivalently, in solving the DLP.

####
**REFERENCES**

- Elgamal, T., 1985. A public-key cryptosystem and a signature scheme based on discrete logarithms. IEEE Trans. Inform. Theory, 31: 469-472.

- Schnorr, C., 1991. Efficient signature generation by smart cards. J. Cryptol., 4: 161-174.

- Nyberg, K. and R.A. Rueppel, 1996. Message recovery for signature schemes based on the discrete logarithm problem. Designs Codes Cryptography, 7: 61-81.

- Rivest, R.L., A. Shamir and L. Adleman, 1978. A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM, 21: 120-126.

- Rabah, K., 2005. Theory and implementation of elliptic curve cryptography. J. Applied Sci., 5: 604-633.

- Pointcheval, D. and J. Stern, 2000. Security arguments for digital signatures and blind signatures. J. Cryptol., 13: 361-396.

- Gordon, D.M., 1993. Discrete logarithms in *GF*(*p*) using the number field sieve. SIAM J. Discrete Math., 6: 124-138.

- Pohlig, S. and M. Hellman, 1978. An improved algorithm for computing logarithms over *GF(p)* and its cryptographic significance. IEEE Trans. Inform. Theory, 24: 106-110.

- Teske, E., 1998. Speeding up Pollard's rho method for computing discrete logarithms. Proceedings of ANTS III-The 3rd International Symposium on Algorithmic Number Theory, LNCS Vol. 1423, Jun. 21-25, 1998, Springer-Verlag, London, UK, pp: 541-554.

- Hastad, J. and M. Naslund, 1998. The security of individual RSA bits. Proceedings of the 39th Annual Symposium on Foundations of Computer Science, Nov. 8-11, IEEE Society, USA, pp: 510-519.

- Van Oorschot, P.C. and M.J. Wiener, 1996. On Diffie-Hellman Key Agreement with Short Exponents. In: Advances in Cryptology-EUROCRYPT`96, Maurer, U. (Ed.). LNCS Vol. 1070, Springer-Verlag, Berlin Heidelberg, ISBN-13: 978-3-540-61186-8, pp: 332-343.

- McCurley, K.S., 1990. The discrete logarithm problem. Proc. Symp. Applied Math., 42: 49-74.

- Harasawa, R., J. Shikata, J. Suzuki and H. Imai, 1999. Comparing the MOV and FR reductions in elliptic curve cryptography. EUROCRYPT, 1592: 190-205.

- Shor, P.W., 1997. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput., 26: 1484-1509.

- Shor, P.W., 1994. Algorithms for quantum computation, discrete logarithms and factoring. Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Nov. 20-22, Santa Fe, NM, USA, pp: 124-134.

- Maurer, U.M., 1994. Towards the equivalence of breaking the Diffie-Hellman protocol and computing discrete logarithms. Proceedings of the 14th Annual International Cryptology Conference on Advances in Cryptology, LNCS Vol. 839, Aug. 21-25, Springer-Verlag, London, UK, pp: 271-281.

- Pointcheval, D. and J. Stern, 1996. Security Proofs for Signature Schemes. In: Advances in Cryptology-EUROCRYPT`96, Maurer, U. (Ed.). LNCS Vol. 1070, Springer-Verlag, Berlin Heidelberg, ISBN-13: 978-3-540-61186-8, pp: 387-398.