INTRODUCTION
Discrete logs have a long history in number theory. Initially, they were used primarily in computations in finite fields (where they typically appeared in the closely related form of Zech’s logarithm). However, they were rather obscure, just like the Integer Factorization Problem (IFP). Unlike the latter, they could not even invoke any famous quotes of Gauss^{[1]} about their fundamental importance in mathematics. The status of discrete logs started to grow in the 20th century, as more computations were done and as more thought went into algorithmic questions. It appears that they started to play an important role in cryptography as early as the 1950s, long before public-key systems appeared on the scene, as cryptosystems based on shift-register sequences displaced those based on rotor machines^{[2]}. Discrete logs occur naturally in that context as tools for finding where in a shift-register sequence a particular block occurs. The main impetus for the intensive current interest in discrete logs, though, came from the invention of the Diffie-Hellman method^{[3]}.
The most important tool necessary for the implementation of public-key cryptosystems is the Discrete Log Problem (DLP). Many popular modern cryptoalgorithms base their security on the DLP^{[1,2]}. Based on the difficulty of this problem, Diffie and Hellman^{[3,4]} proposed the well-known Diffie-Hellman key agreement scheme in 1976. Since then, numerous other cryptographic protocols whose security depends on the DLP have been proposed, including the ElGamal encryption and signature schemes^{[5]}, the US government Digital Signature Algorithm (DSA)^{[6-9]}, which is perhaps the best-known example of a DLP system, the Schnorr signature scheme^{[10]} and the Nyberg-Rueppel signature scheme^{[11,12]}. Due to interest in these applications, the DLP has been extensively studied by mathematicians for the past 20 years. The mathematical challenge here lies in computing discrete logarithms in finite fields of type Z_{p}, which consist of the integers modulo a large prime p. Although this problem can be considered difficult, there are known subexponential-time algorithms for solving it, such as index calculus^{[12]} and the Number Field Sieve (NFS)^{[13]}. In practical terms, subexponential time means that a determined hacker with enough processing power can break the system in a few months.
THE MECHANICS OF DISCRETE LOG PROBLEM
In an (abelian) group G^{*} (multiplicatively written) we can consider the equation y = x^{n}, x, y ∈ G, n ∈ Z. If x and y are known real numbers and it is also known that y is some power (say, the n-th) of x, then ordinary logarithms can be used to find n (= log_{x}y) in an efficient manner (here n is known as the index of y ∈ G). However, if x and y are given in a general group such that y = x^{n} = x·x·....·x (n times), then the determination of n is technically much harder and in general cannot be carried out in a reasonable amount of time. This is the discrete analogue of the well-known real logarithm; we call n the discrete logarithm of y relative to the base x^{[10]}.
The exponentiation operation x→y := x^{n} can be implemented as a quick, efficient algorithm. As an example, the computation of 5^{e} can be performed quite efficiently by using the binary expansion of the exponent e. For e = 4369_{10} = 1000100010001_{2} = 2^{12}+2^{8}+2^{4}+2^{0}, we have:

5^{4369} = 5^{2^{12}}·5^{2^{8}}·5^{2^{4}}·5

which requires only 12 squarings and 3 extra multiplications.
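The square-and-multiply idea behind this binary expansion can be sketched in a few lines of Python (an illustrative sketch, not code from the paper):

```python
def square_and_multiply(base, exponent):
    """Compute base**exponent by scanning the exponent's binary expansion."""
    result = 1
    for bit in bin(exponent)[2:]:      # most-significant bit first
        result = result * result      # square once for every bit
        if bit == "1":
            result = result * base    # extra multiply when the bit is set
    return result

# e = 4369 = 1000100010001_2: 12 squarings plus 3 extra multiplications
assert square_and_multiply(5, 4369) == 5 ** 4369
```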
The DLP can also be stated in modular-arithmetic form and can be defined as follows: given a prime p, a generator g of Z_{p} and a nonzero element γ ∈ Z_{p}, find the unique integer k, 0≤k≤p–2, such that γ = g^{k} mod p. The integer k is called the discrete logarithm of γ to the base g (which we can write compactly as follows: b^{e} ≡ r (mod p) and e = log_{b}r, where b is the base, e is the exponent and r is the residue mod p). Here, Z_{p} denotes the set of integers {0, 1, 2, ..., p–1}, where addition and multiplication are performed modulo p. It is well known that there exists a nonzero element g ∈ Z_{p} such that each nonzero element in Z_{p} can be written as a power of g; such an element g is called a generator of Z_{p}.
Similarly, we can perform a modular exponentiation easily; for example, the computation of 5^{e} mod p can be carried out efficiently by reducing modulo p after each squaring and after each multiplication by 5 and then continuing. That is, if p = 5779 and e = 4369, then 5^{e} mod p = 4720.
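The same square-and-multiply routine, reducing modulo p at every step, reproduces the value quoted above; Python's built-in three-argument pow(base, exp, mod) performs exactly this computation:

```python
def mod_exp(base, exponent, modulus):
    """Left-to-right square-and-multiply, reducing mod p at every step."""
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus    # reduce after each squaring
        if bit == "1":
            result = (result * base) % modulus  # ...and after each multiply
    return result

assert mod_exp(5, 4369, 5779) == 4720           # the value quoted in the text
assert mod_exp(5, 4369, 5779) == pow(5, 4369, 5779)
```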
On a similar note, consider solving 5^{e} ≡ 2437 (mod 5779), which is equivalent to determining e = log_{5} 2437 in Z_{5779}; there is no known method of similarly low complexity. That is, it is easy to take an ordinary logarithm but not a modular/discrete logarithm. This is the backbone of cryptoalgorithms based on the discrete logarithm problem^{[1]}.
As an example, let’s solve: b^{4369} ≡ 2437 (mod 5779). To solve this, we note that p = 5779 is indeed a prime number, so that by Fermat’s little theorem, a^{p-1} ≡ 1 (mod p) for a ∈ (Z/pZ)^{*}, we have b^{5778} ≡ 1 (mod 5779). It follows that, if we raise both sides of our equation to the power 529 (i.e., the multiplicative inverse e^{-1} (mod p-1) = (4369)^{-1} mod 5778 = 529), we find the solution: b ≡ r^{1/e} (mod p) ≡ 2437^{529} (mod 5779) = 2249.
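The worked example can be replayed directly: invert the exponent modulo p-1 and raise both sides to that power (a sketch; Python 3.8+ supports the modular inverse via pow with exponent -1):

```python
p, e, r = 5779, 4369, 2437

e_inv = pow(e, -1, p - 1)       # multiplicative inverse of 4369 mod 5778
assert e_inv == 529

b = pow(r, e_inv, p)            # b = r^(1/e) mod p
assert b == 2249
assert pow(b, e, p) == r        # check: b^4369 == 2437 (mod 5779)
```

Note that this works because the exponent e happens to be invertible modulo p-1; it solves for the unknown base b, which is easy, and is not a discrete-logarithm computation (where the exponent is the unknown).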
In general, if the modulus of the DLP is replaced with a product of two primes, then finding the solution becomes naturally infeasible for large moduli, simply because the factorization of large integers is infeasible. This is the backbone on which the security of public-key cryptosystems like RSA rests^{[7,14]}. In the RSA public-key cryptosystem, for example, Bob’s public key is (e, n) and his private key is (d, n), where n is the product of two prime numbers p and q (i.e., n = p·q) such that ed ≡ 1 (mod (p-1)(q-1)). The product n is the modulus, e is the public exponent and d is the secret exponent. To encrypt a plaintext message M for Bob, Alice has to compute the ciphertext C = M^{e} (mod n). Bob can decrypt C by computing C^{d} = (M^{e})^{d} ≡ M (mod n). No one except Bob can decrypt C, since d is known only to Bob. The RSA cryptoalgorithm can be broken by factoring n into p and q. If n is factored then (p-1)(q-1) can be found and from this d can be computed. Hence, any adversary that factors n can find the private key d ≡ e^{-1} (mod (p-1)(q-1)) and with it decrypt any encrypted message^{[7,14]}. Therefore, the algorithm is secure only if the factorization of the carefully chosen, sufficiently large two prime numbers requires a superpolynomial amount of time with respect to the size of the number. The key question is, therefore, how large is sufficiently large to make this recovery virtually impossible? In the 1980s it was generally held that prime numbers of fifty-odd digits (i.e., ~10^{50}) would suffice. Currently, you need a 1024-bit number to get the same security you got from a 512-bit number in the early 1980s^{[13]}. If you want your keys to remain secure for 20 years, 1024 bits is probably too short (Fig. 1)^{[16]}.
A technical comparison between DLP-based cryptoalgorithms and RSA shows that, in many cases where algorithms of comparable functionality exist, say one over the finite field of integers modulo a prime p and another using a composite integer n of the same size, breaking the discrete log modulo p appears to be somewhat harder than factoring the integer n. For purposes of key generation, many people prefer the DH algorithm over RSA, since the session key it generates is evanescent. In the simplest application of RSA to key generation, as seen above, Alice creates a session key and transmits it to Bob using Bob’s public key. An eavesdropper who can coerce Bob, via clever social engineering, into revealing his private key can recover the full text of the communication exchanged between Alice and Bob. On the other hand, if Alice and Bob use the DH key exchange protocol to generate the session key, destroy it after the session ends and do not store their communication, then neither coercion nor cryptanalysis will enable the eavesdropper to recover what information was exchanged. It is widely believed that the DSA is based on discrete logs because it is harder to use it for encryption than if it were based on RSA (and thus on the Integer Factorization Problem, IFP).
Standard DLP cryptosystems are based on multiplicative groups whose main operation is exponentiation.

Fig. 1: Proposed minimum key sizes (in bits) to be regarded as safe for RSA and ECC
The corresponding problem in additive (i.e., abelian)
groups is: given P and kP = Q (P added to itself k times), find the integer
k (i.e., find k = log_{P} Q). This is much more difficult! There is no one-step operation like taking logarithms
that we can use to get the solution. So we may know P and kP and yet not be
able to find k in a reasonable amount of time. This is called the Discrete Log
Problem for abelian groups. We could always repeatedly subtract P from kP till
we get 0. But if k is large, this will take us a very long time! Several important
cryptosystems are based on the difficulty of solving the DLP over finite abelian
groups. The solution is even tougher if the underlying group arises from an
elliptic curve over a finite field^{[16]}.
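The "repeatedly subtract P" idea above can be made concrete in the simplest additive group, the integers modulo n (a toy stand-in for an elliptic-curve group; the parameters below are invented for illustration):

```python
def additive_dlog(P, Q, n):
    """Naive additive-group DLP: find k with k*P == Q (mod n),
    stepping through the multiples of P one at a time."""
    acc, k = 0, 0
    while acc != Q:
        acc = (acc + P) % n       # add one more copy of P
        k += 1
        if k > n:
            raise ValueError("Q is not a multiple of P modulo n")
    return k

n, P = 5779, 123                  # toy parameters (gcd(123, 5779) = 1)
Q = 456 * P % n                   # the "public" value Q = [456]P
assert additive_dlog(P, Q, n) == 456   # recoverable only because n is tiny
```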
In 1985, Victor Miller (IBM)^{[17]} and Neal Koblitz (University of Washington)^{[18]} independently realized that the DLP in the additive group associated with an elliptic curve can be used for a classical Diffie-Hellman (DH)-style public-key exchange method. In the same year Scott Vanstone and co-researchers realized that the invention by Miller and Koblitz was not just a mathematical generalization of the original DH idea, but that some aspects of it could lead to a very promising alternative public-key cryptosystem that could be put into real practical application - this led to the formation of what is today Certicom^{[19]}. In Elliptic Curve Cryptography (ECC)^{[16]}, the multiplicative group is replaced by the additive group of elliptic curve points and the exponentiation operation by scalar multiplication of a point (i.e., calculation of g^{k} = g·g·....·g (k times) for a generator g of a multiplicative group is replaced by calculation of [k]P = P+P+.....+P (k times) for a generator point P of an additive group of elliptic curve points). Thus, the computational performance of cryptographic protocols based on elliptic curves depends strongly on the efficiency of the scalar multiplication. Further, elliptic curve cryptosystems appear to offer the possibility of using much smaller key sizes than would be required by RSA-type cryptoalgorithms of comparable security (Fig. 1). For more detail on the implementation of ECC, see^{[16-19]}.
Time complexity of DLP: Suppose G = Z^{*}_{p} with p a 200-bit prime. Using 2^{20} computers, each one running at 400 MHz and assuming that one exponentiation (b^{e} mod p) takes only one machine cycle, then 400·2^{20} ≈ 2^{29} exponentiations can be made per second per computer. As there are around 2^{25} sec in a year, it would take about 2^{200}/(2·2^{20+29+25}) ≈ 2^{125} years to find the desired discrete logarithm, on average, by the trial-and-error method.
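The trial-and-error search whose cost is estimated above looks like this on a toy instance (a sketch, feasible only because p is tiny):

```python
def brute_force_dlog(g, y, p):
    """Find e with g**e == y (mod p) by exhaustive search over exponents."""
    acc = 1
    for e in range(p - 1):
        if acc == y:
            return e
        acc = (acc * g) % p       # one more multiplication by g
    raise ValueError("y is not a power of g modulo p")

# 4720 = 5^4369 mod 5779 (from the earlier example), so a solution must exist
e = brute_force_dlog(5, 4720, 5779)
assert pow(5, e, 5779) == 4720
```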
The presumed intractability of the DLP, for appropriate choices of G, in contrast to the relative efficiency of calculating the discrete exponentiation, has made the DLP a basic building block of many cryptographic applications, including public-key encryption algorithms, digital signature schemes and key agreement protocols^{[2-6,11,16-19]}.
DIFFIE-HELLMAN MULTI-USER CRYPTOSYSTEM
Prior to 1970, symmetric-key cryptosystems were the only crypto-mode in existence. In symmetric-key cryptoprotocols, a common key (the shared master secret key) is used by both communicating parties to encrypt and decrypt messages. These symmetric-key cryptoprotocols provide high-speed communication throughput but have the drawback that a common (or session) key must be established before communication between the parties can begin. The process of exchanging the cryptokeys is referred to as key distribution and can be very difficult^{[20]}!
It was Merkle who first introduced the basic concept of public-key cryptosystems with a view to overcoming the key distribution problem^{[7,21]}. However, it was Diffie and Hellman^{[3,4]} who were the first to introduce practical public-key cryptography, which eliminated the need for the key distribution encountered with private-key cryptosystems and is widely used today. The system was discovered independently by GCHQ (British Intelligence) a few years before Diffie and Hellman found it, but they could not tell anyone about their work; perhaps it was discovered by others before. That this system was discovered independently more than once should not surprise you, given how simple it is! The encoding function here is a trapdoor function - one whose inverse is impractical to compute unless some extra information is available. This extra information (called the decrypting key) is not required for encrypting the message, yet is essential for decrypting it in reasonable time. The beauty of such a system is that the encrypting process need not be kept secret. Each user has his own personal encrypting function, which is public information (hence the name public-key) and a decoding key, which he keeps secret.
The basic concept of a public-key cryptoalgorithm: In a public-key cryptosystem each user places in a public-key server an encryption procedure E. That is, the public-key server is a directory giving the encryption procedure of each user. The user keeps secret the details of his corresponding decryption procedure D. These procedures have the following properties:
a. Decrypting the encrypted form of a plaintext message C = E(T) yields T, i.e.:

D(C) = D(E(T)) = T

b. Both E and D are easy to compute.

c. By publicly revealing E the user does not reveal an easy way to compute D. This means that in practice only he can decrypt messages encrypted with E, or compute D efficiently.

d. If a message T is first deciphered and then enciphered, T is the result, i.e.:

E(D(T)) = T
An encryption (or decryption) procedure typically consists of a general method and an encryption key. The general method, under control of the key, encrypts a plaintext message T to obtain the encrypted form of the message, or ciphertext C. Everyone can use the same general method; the security of a given message will rest on the security of the key. Revealing an encryption algorithm then means revealing the key.
When the user reveals E, he reveals a very inefficient method of computing D(C): testing all possible messages T until one finds E(T) = C. If property (c) is satisfied the number of such messages to test will be so large that this approach is impractical.
A function E satisfying (a)-(c) is a trapdoor one-way function; if it also satisfies (d) it is a trapdoor one-way permutation. In 1974 the first detailed description of such a one-way function was published^{[7,21]}. That is, a one-to-one function f: X→Y is one-way if it is easy to compute f(x) in polynomial time for any x ∈ X but hard to compute f^{-1}(y) for most randomly chosen y in the range of f. Diffie and Hellman^{[3,4]} were the first to introduce the concept of trapdoor one-way functions into cryptoalgorithms. These functions are called one-way because they are easy to compute in one direction but (apparently) very difficult to compute in the other direction. They are called trapdoor functions since the inverse functions are in fact easy to compute once certain private trapdoor information is known. A trapdoor one-way function that also satisfies (d) must be a permutation: every message is the ciphertext for some other message and every ciphertext is itself a permissible message (the mapping is one-to-one and onto). Property (d) is needed to implement digital signature schemes^{[7,8]}. Public-key cryptosystems, however, tend to be more computation-intensive than symmetric ones, many of which are fully executed in hardware alone^{[20]}, but their algebraic foundations provide robust proofs of security that few symmetric cryptoschemes can match^{[2]}.
The mechanics of the Diffie-Hellman algorithm: The Diffie-Hellman procedure depends on rather magical properties of whole numbers. In the nineteenth century Gauss established an elaborate body of theorems based on the idea of arithmetical remainders. He adopted a notation that is widely used by mathematicians today, y ≡ x (mod n), which means that if you divide y by n you get the remainder x (cf. 15 ≡ 2 mod 13). The symbol ≡ is called a congruence relation and simply means equivalent to, while mod is short for modulus, or modulo; we will use the symbols = and ≡ interchangeably. If you multiply two numbers in this system you still get a number between 0 and 12. For example, 4×6 mod 13 = 24 mod 13 = 11. This, then, is just a notation.
The Diffie-Hellman technique makes use of the apparent difficulty of computing logarithms over a finite field F_{p} with p elements. The system parameters, therefore, consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements. Let:

y = g^{x} (mod p), 1 ≤ x ≤ p-1

where g is a fixed primitive element of F_{p}; then x is referred to as the logarithm of y to the base g, modulo p:

x = log_{g} y (mod p), 1 ≤ y ≤ p-1
Calculation of y from x is easy, taking at most 2·log_{2} p multiplications^{[22]}. For example, for x = 34:

y = g^{34} = ((((g^{2})^{2})^{2})^{2})^{2}·g^{2}
Computing x from y, on the other hand, can be much more difficult and, for certain carefully chosen values of p, requires on the order of p^{1/2} operations, using the best known algorithm^{[23]}. The security of DH, therefore, depends crucially on the difficulty of computing logarithms modulo p and if an algorithm whose complexity grew as log_{2} p were to be found, then DH cryptosecurity would be broken^{[1]}.
CLASSICAL DIFFIE-HELLMAN KEY EXCHANGE PROTOCOL
Let us consider our usual communicating partners, Alice and Bob, and not forget the malevolent eavesdropper Eve. Alice must communicate vital information to Bob that must reach him securely. Their communications are monitored by Eve, who must not discover the message. If Alice and Bob could agree on a secret encoding key, they could encrypt their message. Fortunately, Alice knows the Diffie-Hellman algorithm.
Now let’s apply this mathematical procedure to the Diffie-Hellman algorithm. The method works as follows: (i) Both the active participants (say Bob and Alice) must first agree on two numbers, p and g. Numbers p and g can be publicly known. Parameter p is a prime number and parameter g (usually called a generator) is an integer less than p, which is capable of generating every element from 1 to p-1 when multiplied by itself a certain number of times, modulo the prime p. (ii) Each participant must next choose another randomly generated number, perform a mathematical operation that involves p, g and the chosen number; and then transmit the result to the other participant.
Generation of a shared key using the DH key exchange protocol: The system parameters consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements. Alice and Bob agree on a prime number p and an integer g that has order p-1 modulo p (so g^{p-1} ≡ 1 (mod p), but g^{s} ≢ 1 (mod p) for any positive s < p-1). Alice chooses a random number s_{A} < p and Bob chooses a random number s_{B} < p. Alice sends g^{s_{A}} (mod p) to Bob and Bob sends g^{s_{B}} (mod p) to Alice.
Alice (A) can now compute the secret key:

k = (g^{s_{B}})^{s_{A}} ≡ g^{s_{B}s_{A}} (mod p)

Likewise, Bob (B) computes the secret key:

k = (g^{s_{A}})^{s_{B}} ≡ g^{s_{A}s_{B}} (mod p)

One may notice that k = g^{s_{A}s_{B}} (mod p) is the session or shared master secret key between Alice and Bob for future secure communication. The session keys are the same (since s = s_{A}s_{B} = s_{B}s_{A}); hence Alice and Bob now have a shared master secret key k. Future communications occur using the session key k. Thus, the session key k can be used with a private-key cryptoalgorithm such as the Data Encryption Standard (DES), 3DES or AES, for encryption purposes.
Now Alice uses the master secret key k to send Bob an encrypted version of her critical message. Bob, who also knows k, is able to decode the message. Meanwhile, the hacker (Eve) sees both g^{s_{A}} (mod p) and g^{s_{B}} (mod p), but she is not able to use this information to deduce s_{A}, s_{B} or g^{s_{A}s_{B}} (mod p) quickly enough to stop Bob from thwarting her plan. This is the backbone of DLP cryptosecurity.
The only information that Eve knows is the group G and the elements g, g^{s_{A}} and g^{s_{B}}. If Eve can recover g^{s_{A}s_{B}} from this data then Eve is said to solve the Diffie-Hellman Problem (DHP). More specifically, we can define the DHP as follows: given a finite cyclic group G with generator g ∈ G and elements α = g^{a}, β = g^{b} ∈ G, the DHP asks for γ ∈ G such that γ = g^{ab}.
It is easy to see that if Eve can find discrete logarithms in G (the residue group modulo p) then she can solve the DHP. It is believed for most groups in use in cryptography that the DHP and the Discrete Log Problem (DLP) are equivalent in the complexity-theoretic sense: there is a polynomial-time reduction of one problem to the other and vice versa^{[2,24,25]}. In any case, if one wishes to use the DHP in a particular group as the basis of a cryptosystem, it is necessary that the DLP be hard in that group! Although many groups have been proposed for which the DHP may be hard and used securely, in practice there are only two that are most often used: the multiplicative group (F_{q})^{*} of a finite field of order q, which is employed in this study (a slight modification of it is employed in the Digital Signature Algorithm (DSA)^{[25]}), and the additive group of points on an elliptic curve over a finite field, discussed above.
However, it turns out that if the numbers are all sufficiently large, it is very hard to calculate the discrete logarithm in a reasonable time. The security of the Diffie-Hellman algorithm depends on this fact. Put another way, the eavesdropper has access to p, g, k_{A} and k_{B} but to neither s_{A} nor s_{B}. As a result, the eavesdropper cannot calculate k.
If p is a prime slightly less than 2^{n}, then all quantities are representable as n-bit numbers. Exponentiation then takes at most 2n multiplications mod p, while by hypothesis taking logs requires on the order of p^{1/2} ≈ 2^{n/2} operations. The cryptanalytic effort therefore grows exponentially relative to the computational effort. If n = 200, then at most 400 multiplications are required to compute k_{A} from s_{A}, or k from k_{A} and s_{B}, yet taking logs mod p requires 2^{100}, or approximately 10^{30}, operations.
Implementation of the DH key exchange protocol: In practice the system parameters consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements. For practical applications and for security reasons, cryptokeys of 1024 bits or greater are recommended (Fig. 1). However, here we consider artificially small numbers to help us understand the basic implementation of the DH procedure.
System parameters: The two communicating entities, Alice (A) and Bob (B), both select the prime p = 12884901893 (an 11-decimal-digit, 34-bit number) and a generator g = 12 of order p-1 = 12884901892; where g, k_{A}, k_{B} ∈ Z^{*}_{p}.
Alice (A):

• Selects a random integer s_{A} = 2^{10} = 1024 as her secret key.

• Computes and sends to Bob: k_{A} = g^{s_{A}} mod p = 12^{1024} mod p = 3505577916

Bob (B):

• Chooses a random integer s_{B} = 2^{15} = 32768 < p as his secret key.

• Computes and sends to Alice: k_{B} = g^{s_{B}} mod p = 12^{32768} mod p = 9663562615
Each entity computes the shared master secret key:

Alice: k = (k_{B})^{s_{A}} = g^{s_{B}s_{A}} mod p = 11093904324

where the shared exponent s = s_{B}s_{A} = s_{A}s_{B} = 33554432;

Bob: k = (k_{A})^{s_{B}} = g^{s_{A}s_{B}} mod p = 11093904324

Hence the session master secret key:

k = k_{BA} = k_{AB} = g^{s} = 11093904324 (mod p)

Indeed:

s = s_{A}s_{B} = log_{g} k = log_{g} 11093904324 = 33554432
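The whole numeric exchange can be replayed in a few lines using the example's parameters (a sketch; the assertions check only the protocol's internal consistency):

```python
p, g = 12884901893, 12          # system parameters from the example
s_A, s_B = 2**10, 2**15         # Alice's and Bob's secret exponents

k_A = pow(g, s_A, p)            # Alice computes and sends this to Bob
k_B = pow(g, s_B, p)            # Bob computes and sends this to Alice

shared_A = pow(k_B, s_A, p)     # Alice's view of the master secret key
shared_B = pow(k_A, s_B, p)     # Bob's view of the master secret key

# Both views agree and equal g^(s_A * s_B) mod p
assert shared_A == shared_B == pow(g, s_A * s_B, p)
```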
The DH method is used for communication between two people and makes use of three keys: two secret keys (one for each person) and a session key determined by the two people during the course of the conversation. In other words, the conversation starts with the two people using their own keys; they exchange information to determine a session key, which is then used for all future messages. It is important to note that the Diffie-Hellman algorithm is an excellent tool for key distribution, but cannot be used effectively to encrypt and decrypt messages on the fly independent of the person one communicates with (cf. email communication).
THE ELGAMAL ALGORITHM
The algorithm that we will use here is the ElGamal encryption algorithm. Taher ElGamal was the first mathematician to propose a public-key cryptosystem based on the Discrete Logarithm Problem (DLP)^{[4]}. He in fact proposed two distinct cryptosystems in 1984: one for encryption and the other a digital signature scheme. Since then, many variations have been made on the digital signature system to offer improved efficiency over the original system. The ElGamal public-key encryption scheme can be viewed as the Diffie-Hellman key agreement protocol in key transfer mode. Its security is based on the intractability of the discrete logarithm problem (DLP) and the Diffie-Hellman Problem (DHP)^{[3,4]}.
The ElGamal encryption algorithm is very similar to the RSA encryption algorithm in the sense that it is a public-key algorithm which utilizes modular arithmetic on large numbers^{[7]}. However, the mathematics involved is slightly more complicated. Let us start by examining what values constitute both the public and private keys and how to generate them.
The system parameters consist of a large prime number p and a generator g of the multiplicative group Z^{*}_{p} whose powers modulo p generate a large number of elements, as in the Diffie-Hellman method. Let’s assume the two entities Alice (A) and Bob (B) want to communicate. Alice (A) generates a secret (private) key from a randomly chosen large integer a such that 1≤a≤p-2 and computes her public key A:

A = g^{a} (mod p)
Alice’s authentic public-key set is (p, g, A) and her private key is (a, p). Note that g must be less than p and relatively prime to p. This is essential to the algorithm, because if g and p are not relatively prime then the modular arithmetic underlying the scheme breaks down.
Encryption: Bob (B) encrypts a message M for Alice (A), which A decrypts.
Suppose Bob wishes to send a plaintext message M to Alice, where M is an integer in the range [0, p-1]:
• Bob first obtains A’s authentic public-key ring (p, g, A) placed in the public-key server.

• Generates a large random integer b such that 1≤b≤p-2.

• Computes his public key: B = g^{b} (mod p)

• He uses the DH key exchange protocol, which they had agreed upon a priori, to compute the shared master secret key: S_{AB} = (A)^{b} mod p = g^{ab} (mod p)

• Encrypts the plaintext message M: δ = M·S_{AB} (mod p)

• Sends the ciphertext C = {B, δ} to Alice.
Decryption: Upon receiving the ciphertext C, Alice uses her private key a to compute:

S_{AB} = (B)^{a} (mod p)

and recovers the plaintext message M by computing (B^{a})^{-1}·δ mod p, which can easily be proved as follows:

(B^{a})^{-1}·δ ≡ g^{-ab}·M·g^{ab} ≡ M (mod p)
Implementation of ElGamal encryption with artificially small parameters

Key generation: Entity A selects the prime p = 12884901893 and a generator g = 12 in Z^{*}_{p} with order p-1 = 12884901892.

• Alice (A) chooses the private key: a = 2^{10} = 1024 < p.

• Computes: A = g^{a} mod p = 12^{1024} (mod p) = 3505577916

• A’s public-key ring is: (p, g, A) = (12884901893, 12, 3505577916).
Encryption (Bob): To encrypt a plaintext message M = 352247:

• Bob (B) selects as private key a random integer b = 2^{15} = 32768 < p.

• Computes his public key as: B = g^{b} = 12^{32768} mod p = 9663562615.

• He computes the shared master secret key using the DH key exchange protocol as:

S_{AB} = (A)^{b} = (g^{a})^{b} ≡ (3505577916)^{32768} (mod p) = 11093904324

• He computes the ciphertext as:

δ ≡ M·S_{AB} (mod p) ≡ 352247·11093904324 (mod p) = 9930699416

• B sends C = (B, δ) = (9663562615, 9930699416) to A.
Decryption (Alice): To recover the message M, Alice (A) does the following:

• Receives the ciphertext: C = (B, δ) = (9663562615, 9930699416)

• Computes the master shared secret key: S_{AB} = (B)^{a} = (9663562615)^{1024} (mod p) = 11093904324

• Recovers M by computing: M = δ·(S_{AB})^{-1} (mod p) = 352247, where we note that (S_{AB})^{-1} is the multiplicative inverse of S_{AB} modulo p.
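The full ElGamal round trip with the example's parameters can be checked mechanically (a sketch asserting only internal consistency; Python 3.8+ for the modular inverse via pow):

```python
p, g = 12884901893, 12          # system parameters from the example

a = 2**10                       # Alice's private key
A = pow(g, a, p)                # Alice's public key

M, b = 352247, 2**15            # Bob's plaintext and ephemeral secret
B = pow(g, b, p)                # Bob's public value, sent with the ciphertext
S = pow(A, b, p)                # shared mask g^(ab) mod p
delta = M * S % p               # ciphertext body

S2 = pow(B, a, p)               # Alice recomputes the mask...
recovered = delta * pow(S2, -1, p) % p   # ...and strips it off
assert recovered == M
```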
Common system-wide parameters: All entities may elect to use the same prime p and generator g, in which case p and g need not be published as part of the public key. This results in public keys of smaller size. An additional advantage of having a fixed base g is that exponentiation can be expedited via precomputation. A potential disadvantage of common system-wide parameters is that a larger modulus p may be warranted. For practical applications and for security reasons, cryptokeys of 1024 bits or greater are recommended (Fig. 1).
DIGITAL SIGNATURE AND AUTHENTICATION
Authentication is more important than encryption^{[8,2630]}. Most people's security intuition says exactly the opposite, but it's true. Imagine a situation where Alice and Bob are using a secure communications channel to exchange data. Consider how much damage an eavesdropper (Eve) could do if she could read all the traffic. Then think about how much damage Eve could do if she could modify the data being exchanged. In most situations, modifying data is a devastating attack and does far more damage than merely reading it.
One way to address the authentication problem encountered in public-key cryptography is to attach a digital signature to the end of each message that can be used to verify the sender of the message^{[7,28]}. The significance of a digital signature is comparable to the significance of a handwritten signature. In some situations, a digital signature may be as legally binding as a handwritten signature. Once you have signed some data, it is difficult to deny doing so later - assuming that the private key has not been compromised or out of the owner's control. This quality of digital signatures provides a high degree of nonrepudiation - i.e., digital signatures make it difficult for the signer to deny having signed the data^{[7,8,28]}. This quality is stronger than mere authentication (where the recipient can verify that the message came from the sender); the recipient can convince a judge that the signer sent the message. To do so, he must convince the judge he did not forge the signed message himself! In the authentication problem the recipient does not worry about this possibility, since he only wants to satisfy himself that the message came from the sender.
In short, an electronic signature must be message-dependent as well as signer-dependent. Otherwise the recipient could modify the message before showing the message-signature pair to the judge. Or he could attach the signature to any message whatsoever, since it is not possible to detect electronic cutting and pasting. To implement signatures, the public-key cryptosystem must be implemented with trapdoor one-way permutations, i.e., have property (d), since the decryption algorithm will be applied to unenciphered messages^{[7,8,10]}.
ElGamal was the first cryptographer to propose a digital signature scheme based on the DLP. The ElGamal signature scheme is similar to the encryption algorithm in that the public key and private key have the same form; however, encryption is not the same as signature verification, nor is decryption the same as signature creation, as in RSA. The DSA is based in part on the ElGamal signature algorithm^{[7,8]}.
For a long period of time (1985-1996) after the birth of the ElGamal signature scheme and the family of such signatures (e.g., Schnorr and DSS), it was widely believed that the difficulty of forging such a signature should somehow be related to the discrete logarithm in a large subgroup of a finite field^{[5,10,28]}. However, no formal evidence (formal proof) was established until 1996, when Pointcheval and Stern succeeded in demonstrating affirmative evidence relating the difficulty of signature forgery under a scheme in the ElGamal family of signatures to that of computing the discrete logarithm^{[26]}. They did so by making use of a powerful tool: the Random Oracle Model (ROM) for proofs of security. The ROM-based technique of Pointcheval and Stern is an insightful instantiation of the general ROM-based security-proof technique to proving security for the ElGamal-family signatures.
Digital Signature Algorithm: The Digital Signature Algorithm (DSA) was proposed in August 1991 by the US National Institute of Standards and Technology (NIST) for use in their Digital Signature Standard (DSS) and was later specified in a US Government Federal Information Processing Standard (FIPS 186^{[29,30]}) called the Digital Signature Standard (DSS). It was designed at the NSA as part of the Federal Government's attempt to control high-security cryptography. Part of that policy included prohibition (with severe criminal penalties) of the export of high-quality encryption algorithms. The DSS was intended to provide a way to use high-security digital signatures across borders in a way which did not allow encryption. Those signatures required high-security asymmetric-key algorithms, but the DSA (the algorithm at the heart of the DSS) was intended to allow one use of those algorithms but not the other. It did not work. DSA was discovered, shortly after its release, to be capable of encryption (prohibited high-quality encryption, at that); however, it is so slow when used for encryption as to be even more than usually impractical.
The US government's Digital Signature Algorithm (DSA) is based on much of ElGamal's work^{[5]} and is the best-known example of a large system where a discrete logarithm (DL) algorithm is used. Its security is based on the intractability of the Discrete Logarithm Problem (DLP) in a prime-order subgroup of Z^{*}_{p}. As with the RSA algorithm, these transformations raise the computational complexity of the problem. The discrete logarithm system relies on the discrete logarithm problem modulo p for security and on the speed of modular exponentiation for efficiency. In terms of computational difficulty, the discrete logarithm problem seems to be on a par with factoring^{[28]}.
The Mechanics of the Digital Signature Algorithm (DSA): The signature-creation data consists of the public parameter y, an integer computed as y = g^{x} mod p, as per the DLP above. Note that p and q are large prime numbers^{[31]}. When computing a signature of a message M, no padding of the hash-code is necessary. However, the hash-code must be converted to an integer by applying the method described in Appendix 2.2^{[30]}.
The basic idea of DSA is for the signer of message M (that is, the possessor of the value x behind the publicly known y = g^{x} mod p) to append a pair of numbers r and s obtained by secretly picking another number k between 1 and q, computing r = (g^{k} mod p) mod q (i.e., computing g^{k} mod p and then taking the remainder of that number mod q) and s = k^{-1} (SHA(M)+xr) mod q, where k^{-1} is the multiplicative inverse of k (mod q) and SHA is the Secure Hash Algorithm^{[8]}. He then sends (M, r, s) to the communicating partner. Another NIST standard, SHA (official name SHA-1), reduces a character string of any length to a 160-bit string. In the implementation of DSA, q is a 160-bit prime divisor of p-1 and g is an element of order q in F^{*}_{p}.
The receiver of (M, r, s) from person g^{x} computes u = s^{-1} SHA(M) mod q and v = s^{-1} r mod q and then checks that ((g^{u})(g^{x})^{v} mod p) mod q equals r. If it doesn't, then, by elementary number theory, something definitely went wrong. If it does, then, according to NIST, one can safely assume that the message M came from the presumably unique individual who knows the discrete logarithm of g^{x}. Table 1 shows the sequence of the DSA scheme.
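The signing and verification equations above can be sketched as follows. The toy parameters (p = 283, q = 47), the SHA-1-to-integer reduction and the key sizes are illustrative assumptions only, nowhere near DSS-compliant sizes:

```python
import hashlib, secrets

# Hypothetical toy parameters: q | p-1 and g has order q in Z_p^*.
# Real DSS sizes (1024-bit p, 160-bit q) are far larger.
p, q = 283, 47                   # 283 - 1 = 282 = 6 * 47
g = pow(2, (p - 1) // q, p)      # element of order q

def H(msg: bytes) -> int:
    # SHA-1 digest converted to an integer mod q (stand-in for the FIPS rule)
    return int.from_bytes(hashlib.sha1(msg).digest(), "big") % q

def sign(msg: bytes, x: int):
    while True:
        k = secrets.randbelow(q - 1) + 1
        r = pow(g, k, p) % q                        # r = (g^k mod p) mod q
        s = pow(k, -1, q) * (H(msg) + x * r) % q    # s = k^{-1}(SHA(M) + x*r) mod q
        if r != 0 and s != 0:
            return r, s

def verify(msg: bytes, r: int, s: int, y: int) -> bool:
    if not (0 < r < q and 0 < s < q):
        return False
    u = pow(s, -1, q) * H(msg) % q
    v = pow(s, -1, q) * r % q
    return pow(g, u, p) * pow(y, v, p) % p % q == r  # ((g^u y^v) mod p) mod q

x = secrets.randbelow(q - 1) + 1   # private key
y = pow(g, x, p)                   # public key y = g^x mod p
r, s = sign(b"message M", x)
assert verify(b"message M", r, s, y)
```

The check succeeds because g^{u} y^{v} = g^{s^{-1}(SHA(M)+xr)} = g^{k}, recovering r.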
Table 1: 
Digital Signature Algorithm (DSA) 

The security of DSA is based on the assumption that the only attacks are either those that work in the multiplicative subgroup of order q without exploiting any special properties of this group, or else methods, such as index-calculus ones, that work in the full group modulo p. There is no proof that some algebraic relations could not be exploited to find an improved algorithm.
To date the digital signature algorithm remains seemingly secure, and will stay so until the running times of the methods of Shanks and Pollard can be improved substantially or a more effective algorithm with a better running time emerges to threaten its security. Such an alternative algorithm would not need to be subexponential to break the DSA: a method that runs in time q^{1/4} in a group Q = <q> of prime order would destroy it. Thus, the DSA seems to have attained both a high level of security and low signature storage and implementation time. This lack of progress in developing better algorithms capable of breaking DSA has provided a subjective feeling of comfort to crypto-designers and has led them to choose security parameters close to the edge of what is feasible. Recently, however, the DSA has been superseded by the ECDSA, a similar system based on the group of an elliptic curve rather than a finite field^{[16]}.
DISCRETE LOGARITHM RELATED PROBLEM
Let G be a cyclic group and let g be a generator of G, so that G = {g^{0}, g^{1},...., g^{n-1}}, where n = |G| is the order of G. The discrete log function Dlog_{g,G}: G→Z_{n} takes as input a group element γ and returns the unique α ∈ Z_{n} such that γ = g^{α}. There are several computational problems related to this function that are used in cryptographic primitives. In all cases, we consider an attacker that knows the group G and the generator g: given g^{α}, find α (DLP); alternatively, given g^{α} and g^{β}, find g^{αβ} (DHP).
Methods for solving the DLP: Recall our communicating partners, Alice and Bob. We assume that they have decided in advance to use the DH protocol to form a shared master secret key that they can use for secure communication, which is given by k = g^{s} mod p, where s = s_{A}s_{B} lies in the range [1, n-1] and n is the order of the group. Clearly, the key exchange system is broken as soon as Eve, the eavesdropper, can determine s_{A} from the known k_{A} = g^{s_{A}} (or s_{B} from k_{B} = g^{s_{B}}). This is the motivation for us to look at various techniques to solve

g^{s} ≡ r (mod p)

where g, r and p are known and s needs to be determined. The eavesdropper, who happens to have overheard the exchange and thus knows g, g^{s_{A}} and g^{s_{B}}, will hopefully not be able to compute g^{s_{A}s_{B}}. The problem of how to solve g^{s} ≡ r (mod p) is called the discrete logarithm problem (DLP) (i.e., log_{g} r = s)^{[32,33]}. If the discrete log problem for the group G = <g> is easy, an eavesdropper can compute either s_{A} or s_{B} and can find out what g^{s_{A}s_{B}} is. It is an important open question whether determining g^{s_{A}s_{B}} knowing just g^{s_{A}} and g^{s_{B}} is as hard as the discrete log problem in general^{[34-36]}. However, it is important to note that a fast discrete log algorithm would definitely destroy the security and utility of the widely used Diffie-Hellman protocol. The same threat also affects other security systems based on the DLP, such as ElGamal cryptosystems and the digital signature algorithm (DSA). This has generated a huge research opportunity on the complexity of discrete logs^{[37-39]}.
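The exchange Eve is attacking can be sketched with hypothetical toy parameters (p = 2039 and g = 7 are illustrative stand-ins, far below secure sizes):

```python
import secrets

# Hypothetical toy parameters; a real exchange would use a prime of
# 1024+ bits, as recommended later in this paper.
p, g = 2039, 7

s_A = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
s_B = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

k_A = pow(g, s_A, p)                 # Alice sends g^{s_A} mod p
k_B = pow(g, s_B, p)                 # Bob sends g^{s_B} mod p

# Both sides derive the same master secret g^{s_A s_B} mod p
key_A = pow(k_B, s_A, p)
key_B = pow(k_A, s_B, p)
assert key_A == key_B

# Eve sees (p, g, k_A, k_B); recovering the key means solving
# g^s = k_A (mod p) for s, i.e., the DLP.
```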
For real-time practical applications implementing DLP-based crypto-algorithms like DH and the like (where a certificate and signed hash have been added to prevent man-in-the-middle attacks), the area of most concern is the fact that the specifications for generating the shared master secret key and the certificate for authentication fix the values of p and g. One must be careful since, under some conditions, the discrete log is easy to compute and, for this reason, the value of p must be chosen carefully. For example, it is easy to compute the discrete logarithm when p-1 has only small prime factors, making the group susceptible to the Pohlig-Hellman attack, whose time complexity is bounded by the largest prime factor of the group order. Therefore, p is usually chosen to be a safe prime, i.e., such that (p-1)/2 is itself a prime, say q. Likewise, some care must be given to the choice of g so that the subgroup generated by g is relatively large, but this is usually an easier choice to make than the choice of p. The time complexity is then the same for both methods (one where the generator creates the whole group and the other where the generator generates only the subgroup of order q)^{[40]}.
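The parameter checks just described (p a safe prime, g generating the large prime-order subgroup) can be sketched as follows; the candidate p = 227 and the naive primality test are illustrative assumptions:

```python
def is_prime(m: int) -> bool:
    # Naive trial division; adequate for small illustrative parameters only.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

p = 227                    # hypothetical candidate: 227 = 2*113 + 1
q = (p - 1) // 2
assert is_prime(p) and is_prime(q)   # p is a safe prime: (p-1)/2 = q is prime

# Squaring any h with h^2 != 1 (mod p) lands in the subgroup of order q,
# so g below generates the prime-order subgroup, dodging Pohlig-Hellman.
h = 2
g = h * h % p
assert g != 1 and pow(g, q, p) == 1
```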
The best known generalized algorithms for solving discrete logarithms are still quite slow, but the majority of the time is usually consumed in easily parallelizable precomputations about the group (choice of p and g) in general. Once the precomputation is finished, any discrete logarithm in that group is easily found. In that respect, in most applications implementing secure network connections, it is known that most connections will be created using the only key-exchange algorithm defined in the specification. Hence, the time and expense required to break the majority of secure connections is only slightly greater than the time and expense required to perform the aforementioned precomputations for the group specified in that standard. One time-honored rule of security design is that the value of the data being protected should be less than the expense required to break the security system.
GENERAL ATTACKS ON DISCRETE LOGARITHM PROBLEM
There are many ways Eve could acquire the shared master secret key: one option is to exploit the weakest link in the security system. This could be done via many available avenues; breaking the underlying crypto-algorithm is in most cases the hardest option, to be attempted only as a last resort. Instead the eavesdropper might opt to exploit other weaknesses, such as recovering a key by observing the power consumption or electromagnetic radiation of the crypto-devices, finding vulnerabilities in the security protocols, or simply stealing the key through clever interactive social engineering with those trusted to safeguard it. In most cases, however, the crypto-algorithms remain the most important core tool in security applications.
If we assume that Eve has no other alternative available and so must resort to a brute-force attack on the core crypto-algorithm, she is required to solve the underlying DLP, g^{s} ≡ r (mod p). In applying the brute-force approach to find s from g^{s}, she would have to try s = 0, 1, 2, .... until a solution is found or, alternatively, to put g^{0}, g^{1}, g^{2}, .... in a table and look for r. Either way, the complexity is p. If p consists of t bits, we can say that the complexity is 2^{t}, so the complexity grows exponentially in t. There are much better methods she can resort to, which balance the time complexity against the available memory, such as Shanks' Baby-Step Giant-Step (BSGS) method. Other methods are Pollard's rho, Silver-Pohlig-Hellman and index calculus, all of which we discuss in this paper. Of all these methods, the index calculus method, a subexponential-time algorithm, offers the best performance in cracking DLP-based cryptoschemes. In the case of prime fields, there is a more advanced version of the index calculus, known as the Number Field Sieve (NFS), that solves the DLP in expected time^{[41]}:

L_{q}[1/3, (64/9)^{1/3}] = exp(((64/9)^{1/3} + o(1)) (ln q)^{1/3} (ln ln q)^{2/3})

If q = 2^{m}, there is a variation of the index calculus, known as Coppersmith's algorithm, that solves the DLP in expected time^{[38]}:

exp((c + o(1)) (ln q)^{1/3} (ln ln q)^{2/3})

for some c<1.587. Note that the DLP is still considered hard in these groups because the runtimes of these algorithms are not bounded by any polynomial in log q. However, the existence of these subexponential-time algorithms means that one must use larger key sizes than if only exponential-time attacks were known. For example, for practical security reasons it is recommended that prime fields have p of at least 1024 bits (Fig. 1).
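The exhaustive search at the start of this section can be sketched on a small illustrative case (the same group is worked with BSGS later in the text):

```python
def dlog_bruteforce(g: int, r: int, p: int) -> int:
    # Try s = 0, 1, 2, ... until g^s = r (mod p): O(p) time, O(1) space.
    acc = 1
    for s in range(p):
        if acc == r:
            return s
        acc = acc * g % p
    raise ValueError("r is not in the subgroup generated by g")

# For a t-bit prime p this loop costs on the order of 2^t multiplications.
assert dlog_bruteforce(2, 43, 139) == 115
```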
Shanks' Baby-Step Giant-Step (BSGS): Suppose that one has enough memory available to store m elements of Z_{p}. Then Shanks' algorithm gives us an efficient method to balance the time complexity against the available memory when solving the discrete log. One first computes the baby steps g^{0}, g^{1}, ...., g^{m-1}, where the exponents increase by 1, and sorts these elements into a list that allows easy table lookup. Next one checks whether r is in the table; if not, one checks whether r/g^{m} is in the table; if not, r/g^{2m}; and so on (the giant steps). When r/g^{im} is in the table, say it equals g^{k}, one has found the unknown exponent s = im+k. The time complexity of the baby-step method is p/m, so the product of memory requirement and time complexity is still p ≈ 2^{t}.
Shanks' Baby-Step Giant-Step (BSGS): System requirement: one has enough memory available to store m elements of Z_{p}.
Input: A finite group G = <g> of order n and g, y ∈ G.
Output: The discrete logarithm x of y to the base g.
Procedure:
• Compute m = ⌈√n⌉
• Construct a table T of (i, g^{i·m}) pairs, 1≤i≤m and sort it by the second component of each pair.
• Compute y·g^{j}, starting at j = 0 (y·g^{j} equals g^{x+j}).
• Compare y·g^{j} with the g^{i·m} entries in T. If a match is found, then g^{i·m} = g^{x+j} and thus x = i·m - j is the desired discrete log. If there is no such match, try another value of j, with j≤m.
Time complexity: O(√n) group operations. Space complexity: O(√n) group elements.
As an example, let G = Z^{*}_{139}. Then g = 2 is a generator of G with order n = p-1 = 138. We look for the discrete logarithm of y = 43 to the base g.
Implementing BSGS: First we compute m = ⌈√138⌉ = 12, which we use to construct the table T of (i, g^{i·m}) pairs for 1 ≤ i ≤ 12. Comparing the two table entries, we observe that we have a match in the entries for i = 10 and j = 5, which gives x = i·m - j = 10·12 - 5 = 115, i.e., the discrete logarithm of 43 to the base 2 in Z^{*}_{139} is x = 115. Hence, 2^{115} ≡ 43 (mod 139).
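The procedure can be sketched as below, using a hash map in place of the sorted list (the trade discussed in the next paragraph) and checked against the worked example:

```python
from math import isqrt

def bsgs(g: int, y: int, p: int, n: int) -> int:
    # Baby-step giant-step: find x with g^x = y (mod p), 0 <= x < n.
    m = isqrt(n - 1) + 1                  # m = ceil(sqrt(n))
    # Baby steps: table of y * g^j for j = 0..m-1
    table = {}
    acc = y
    for j in range(m):
        table[acc] = j
        acc = acc * g % p
    # Giant steps: compare g^{i*m} against the table
    gm = pow(g, m, p)
    acc = gm
    for i in range(1, m + 1):
        if acc in table:
            # g^{i*m} = y * g^j  =>  x = i*m - j
            return (i * m - table[acc]) % n
        acc = acc * gm % p
    raise ValueError("logarithm not found")

# Worked example from the text: G = Z_139^*, g = 2, n = 138, y = 43
assert bsgs(2, 43, 139, 138) == 115
```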
In Shanks' algorithm, the check for matches between the two sorted lists of m entries each can be done in linear time (assuming that the representations of elements are compact enough). Hence, the running time of the algorithm is dominated by the arithmetic required to compute the two lists (tables) and the time to sort them. Further, Shanks' algorithm is deterministic. If one is willing to give up determinism, one can replace sorting by hashing, thereby speeding up the process. On the other hand, there is no easy way to reduce the space requirement (other than by increasing the running time), which is on the order of |<g>|^{1/2[42]}. There are other general algorithms for the discrete log problem that run in time O(|<g>|^{1/2}) and use very little space; both methods are randomized and are due to Pollard^{[13]}. We discuss them next.
Pollard's Rho (ρ) method for solving the DLP: Basic idea: Pollard's rho algorithm is based on the birthday paradox^{[43,44]}. If we randomly choose elements (with replacement) from a set of N numbered elements, we only need to choose about √(πN/2) elements until we get one element twice (called a collision). This can be applied to find discrete logarithms as follows. By choosing a, b ∈_{R} [0, N-1], one obtains a random group element g^{a}h^{b}. Such group elements are randomly selected until we get a group element twice. If g^{a_{1}}h^{b_{1}} and g^{a_{2}}h^{b_{2}} represent the same group element, then a_{1} + x·b_{1} ≡ a_{2} + x·b_{2} (mod N), whence:

x = log_{g} h = (a_{1} - a_{2})(b_{2} - b_{1})^{-1} (mod N)

A naive implementation of this idea has an expected storage requirement of E(T) group elements, where T is the random variable describing the number of group elements chosen until the first collision occurs. The main thrust of Pollard's rho method is then how to detect the collision without the need to store E(T) group elements. Collision detection in this method is done by means of a random function f: G → G. For actual implementations, f is chosen such that it approximates a random function as closely as possible.
The function originally suggested by Pollard (for Z^{*}_{p}) can be generalized to arbitrary cyclic groups as:

f(x) = h·x if x ∈ S_{1}; x^{2} if x ∈ S_{2}; g·x if x ∈ S_{3}

where S_{1}, S_{2} and S_{3} are three sets of roughly the same size which form a partition of G. However, Teske^{[39,40]} has shown that Pollard's function f is not random enough and gives an alternative and better function:

f(x) = x·m_{s} for x ∈ M_{s}

where the multipliers m_{s} are fixed group elements and the M_{s} are again roughly of the same size and form a partition of G, which this time is partitioned into more than three subsets. For both functions it is of course necessary that determining the subset M_{s} or S_{i}, respectively, to which a group element belongs is very efficient.
By starting at a random point and iteratively applying a random function, random points are generated. Because the group is finite, we eventually arrive at some point for a second time (i.e., a collision occurs), which happens after E(T) steps in expectation; the sequence of subsequent points then cycles forever. With very little time and space overhead, it is possible to detect such a cycle with Floyd's cycle-finding algorithm (or with an improved variant by Brent^{[45]}).
The Mechanics of Pollard's Rho (ρ) method
Input: A finite group G = <g> of order n and g, y ∈ G.
Output: The discrete logarithm x of y to the base g.
Procedure:
• Break the set G into three approximately equal-sized sets S_{1}, S_{2} and S_{3}.
• Define a sequence of elements over G, x_{0}, x_{1}, x_{2},...., as x_{0} = 1 and

x_{i+1} = y·x_{i} if x_{i} ∈ S_{1}; x_{i}^{2} if x_{i} ∈ S_{2}; g·x_{i} if x_{i} ∈ S_{3}

• This sequence, in turn, defines two other integer sequences a_{i}, b_{i} with x_{i} = g^{a_{i}} y^{b_{i}}, such that a_{0} = b_{0} = 0 and, according as x_{i} ∈ S_{1}, S_{2} or S_{3},

a_{i+1} = a_{i}, 2a_{i} or a_{i}+1 (mod n); b_{i+1} = b_{i}+1, 2b_{i} or b_{i} (mod n)

• Cycle-finding calculation: the next step is to find a pair (x_{i}, x_{2i}) with x_{i} = x_{2i}. In this case g^{a_{i}} y^{b_{i}} = g^{a_{2i}} y^{b_{2i}}, so that:

x = (a_{2i} - a_{i})(b_{i} - b_{2i})^{-1} (mod n)

As long as gcd(b_{i} - b_{2i}, n) = 1, the discrete log can be calculated as above. In the rare case that a collision x_{i} = x_{2i} is not found, or that b_{i} ≡ b_{2i} (mod n), the procedure can be repeated by selecting random a_{0}, b_{0} ∈ [1, n-1] and restarting with x_{0} = g^{a_{0}} y^{b_{0}}.
Time complexity: O(√n)
Space complexity: O(1)
Implementing Pollard's rho for solving the DLP: As an example, let H = Z^{*}_{383}. Then g = 2 is a generator of the subgroup G of Z^{*}_{383} of order n = 191. Suppose y = 228. Partition G into three sets according to the rule: x ∈ S_{1} if x ≡ 1 (mod 3), x ∈ S_{2} if x ≡ 0 (mod 3) and x ∈ S_{3} if x ≡ 2 (mod 3) and set x_{0} = 1, a_{0} = 0 and b_{0} = 0. The calculations show that x_{14} = x_{28} = 144. Computing:

x = (a_{28} - a_{14})(b_{14} - b_{28})^{-1} (mod 191)

results in x = 110. Indeed: 2^{110} ≡ 228 (mod 383).
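A sketch of the procedure with Floyd's cycle finding, using the mod-3 partition of the example above:

```python
def pollard_rho_dlog(g: int, y: int, p: int, n: int) -> int:
    # Find x with g^x = y (mod p), where g has prime order n, tracking
    # x_i = g^{a_i} y^{b_i} and using the mod-3 partition of the example.
    def step(x, a, b):
        if x % 3 == 1:                                 # x in S_1: multiply by y
            return y * x % p, a, (b + 1) % n
        if x % 3 == 0:                                 # x in S_2: square
            return x * x % p, 2 * a % n, 2 * b % n
        return g * x % p, (a + 1) % n, b               # x in S_3: multiply by g

    # Floyd's cycle finding: tortoise (x_i) vs hare (x_2i), from x_0 = 1
    x1, a1, b1 = step(1, 0, 0)
    x2, a2, b2 = step(*step(1, 0, 0))
    while x1 != x2:
        x1, a1, b1 = step(x1, a1, b1)
        x2, a2, b2 = step(*step(x2, a2, b2))
    # g^{a1} y^{b1} = g^{a2} y^{b2}  =>  x = (a2 - a1)/(b1 - b2) mod n
    r = (b1 - b2) % n
    if r == 0:
        raise ValueError("uninformative collision; restart from a random point")
    return (a2 - a1) * pow(r, -1, n) % n

# Worked example: subgroup of Z_383^* of order n = 191, g = 2, y = 228
x = pollard_rho_dlog(2, 228, 383, 191)
assert pow(2, x, 383) == 228
```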
Pollard's Lambda (λ) method: Pollard's lambda method is also known as the method for catching kangaroos.
Input: A finite group G = <g> of order n; g, y ∈ G and a value w standing for the size of the interval [A, B] (w = B - A) in which the discrete logarithm is known to lie.
Output: The discrete logarithm x of y to the base g.
Procedure:
• Compute two sequences T and W (called kangaroo trails), each hopping by a common pseudorandom jump function f that maps group elements to small positive exponents. T's trail (the tame kangaroo) begins at y'_{0} = g^{B} and proceeds by y'_{i+1} = y'_{i}·g^{f(y'_{i})} up to y'_{N}. Note that y'_{i} = g^{B+d_{i}}, where d_{i} = f(y'_{0}) + .... + f(y'_{i-1}) is the distance from y'_{0} to y'_{i}. W's trail (the wild kangaroo) begins at y_{0} = y = g^{x} and proceeds by the same rule, with distance d_{M} after M jumps. The search ends when y_{M} = y'_{N} for some y_{M} in W's trail (this will occur if W's trail hits any point of T's trail and then follows it into the trap), at which point the discrete logarithm is calculated as:

x = B + d_{N} - d_{M}

• If no collision occurs (the probability of which can be controlled) before d_{M} exceeds B + d_{N} - A = w + d_{N}, then the attempt is terminated (failure): W has travelled beyond the trap. Subsequent iterations of the above sequences, with a modified jump function, can be run as required.
Time complexity: O(√w) and space complexity: O(log w). The Shanks method and Pollard's kangaroo method can be used to compute the discrete log of y in about √m steps when this discrete log is known to lie in an interval of length at most m. Hence, crypto-designers have to be careful not to limit the range in which discrete logs lie.
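The two trails can be sketched as below. The power-of-two jump table, the trail length and the restart strategy are all illustrative assumptions (Pollard's original parameter choices differ):

```python
from math import isqrt

def pollard_kangaroo(g, y, p, n, a, b):
    # Find x in [a, b] with g^x = y (mod p); g has order n. All tuning
    # choices below (jump table, trail length, 20 restarts) are illustrative.
    w = b - a
    k = max(2, w.bit_length() // 2 + 2)
    for attempt in range(20):
        jumps = [2 ** ((i + attempt) % k) for i in range(k)]  # jump sizes
        N = 4 * isqrt(w) + 4
        # Tame kangaroo: starts at g^b, hops N times, sets the trap at the end.
        t, d_t = pow(g, b, p), 0
        for _ in range(N):
            j = jumps[t % k]
            t = t * pow(g, j, p) % p
            d_t += j
        # Wild kangaroo: starts at y = g^x and hops until it lands on the trap.
        wld, d = y % p, 0
        while d <= w + d_t:
            if wld == t:
                return (b + d_t - d) % n   # x + d = b + d_t
            j = jumps[wld % k]
            wld = wld * pow(g, j, p) % p
            d += j
    return None   # every attempt overshot the trap; caller may retry

# Same subgroup as the rho example: order n = 191 in Z_383^*, y = 228
x = pollard_kangaroo(2, 228, 383, 191, 0, 191)
assert x is not None and pow(2, x, 383) == 228
```

Any equality wld == t already implies x + d ≡ b + d_t (mod n), so a hit always yields a correct logarithm; the retries only guard against trails that never meet.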
To date the running times of the Pollard and Shanks algorithms have not been improved substantially. This has led to the assumption that, in the absence of other structure in a cyclic group G = <g> of prime order, on the order of |G|^{1/2} operations are required to compute a discrete log in G. Many of the modern asymmetric-key cryptosystems based on DLPs, such as DSA^{[2,8,9]}, rely on Schnorr's method^{[10]}, which reduces the computational burden normally imposed by having to work in a large finite field by working within a large multiplicative subgroup Q = <q> of prime order, with the assumption that the discrete log in Q cannot be solved much faster than in |Q|^{1/2} steps. For |Q| of order 2^{160}, as in DSA, this is about 10^{24} group operations. Since group operations are typically considerably more intensive than the basic instructions of ordinary computers (^{[46]} for the case of ECC), it is reasonable to estimate that 10^{24} group operations might require at least 10^{26} ordinary computer instructions. A MIPS-year (MY) is equivalent to about 3×10^{13} instructions, so breaking DSA, say, with the Shanks or Pollard algorithms would require over 10^{12} MY, which appears to be adequate for a while at least. Several crypto-researchers, including Richard Crandall and Len Adleman, have observed that all the instructions executed by digital computers in history are on the order of Avogadro's number, about 6×10^{23}. The largest factoring projects so far have used 10^{17} operations and other large distributed projects have accumulated on the order of 10^{20} operations (Table 2).
Table 2: 
Computing power available for integer factorization (in MY) 

The Pohlig-Hellman Algorithm: Here we present the Pohlig-Hellman algorithm for computing discrete logarithms^{[47]}. If G is not simple (i.e., its order is not prime), the algorithm reduces a DLP in G to several DLPs in smaller groups. It thus restricts the possible choices of groups in which the DLP is maximally hard. The idea is to take advantage of many-to-one homomorphic images in which the DLP is easier to solve. This will be important in the sequel because generalizations suggest that, even if one considers structures other than groups for the DLP, one should consider using simple structures: using a simple structure is the most reliable way to avoid similar attacks^{[2,47,48]}.
The mechanics of the Pohlig-Hellman algorithm
Input: A finite group G = <g> of order n; g, y ∈ G.
Output: The discrete logarithm x of y to the base g.
Procedure:
1. Let the factorization of n be n = p_{1}^{e_{1}} p_{2}^{e_{2}} .... p_{r}^{e_{r}}.
2. For i = 1 to r compute the components g_{i} = g^{n/p_{i}^{e_{i}}} and y_{i} = y^{n/p_{i}^{e_{i}}}, so that g_{i} has order p_{i}^{e_{i}} and y_{i} = g_{i}^{x_{i}} with x_{i} ≡ x (mod p_{i}^{e_{i}}).
3. Set x_{i} = l_{0} + l_{1}p_{i} + .... + l_{e_{i}-1}p_{i}^{e_{i}-1}, the base-p_{i} expansion of x_{i}, with digits l_{j} to be determined.
4. For j = 0 to e_{i}-1 do: compute h_{j} = (y_{i} g_{i}^{-(l_{0} + l_{1}p_{i} + .... + l_{j-1}p_{i}^{j-1})})^{p_{i}^{e_{i}-1-j}}; then compute the digit l_{j} = log_{g_{i}^{p_{i}^{e_{i}-1}}} h_{j} in the subgroup of order p_{i} (e.g., using Pollard's rho method).
5. Use the CRT to compute the discrete log x with 0≤x≤n-1 from the congruences x ≡ x_{i} (mod p_{i}^{e_{i}}), 1≤i≤r.
6. Output x.
Note that there are several well-known algorithms for performing step 5 in polynomial time. The idea is that in step 2 one reduces the problem to finding x_{i} ≡ x (mod p_{i}^{e_{i}}) for each i; this is further reduced by finding the base-p_{i} expansion of x_{i} one digit at a time. Observe that each time step 4 is reached, the base element g_{i}^{p_{i}^{e_{i}-1}} has order p_{i}. Thus one need only compute discrete logarithms in subgroups of order p_{i}. Unless there is some trivial way to solve the DLP in G, this is more efficient than computing one discrete logarithm in the full group of order n. It is thus desirable to choose G with prime order, so that this algorithm yields no reduction at all.
In general, we observe that if G is not simple, then there exist groups G_{i} and homomorphisms f_{i}: G → G_{i} with non-trivial kernels. One may then solve the corresponding DLP in each homomorphic image. Furthermore, if there exist such G_{i} for which the product map

f = (f_{1}, ...., f_{r}): G → G_{1} × .... × G_{r}

is a monomorphism, then solving the DLP in each G_{i} solves the DLP in G up to an application of the Chinese remainder theorem.
Time complexity: O(Σ^{r}_{i=1} e_{i}(log n + √p_{i})) group operations, provided the factorization of n is given.
Space complexity: O(1) if used in conjunction with Pollard's rho method, for example.
Implementing the Pohlig-Hellman algorithm: Let G = Z^{*}_{271} and let g = 43, a generator of Z^{*}_{271} of order n = 270. Let y = 210. The discrete logarithm is computed as follows:
• The prime factorization of n is 270 = 2·3^{3}·5.
• Compute x_{1} ≡ x (mod 2): raising g and y to the power n/2 and solving the resulting DLP in the subgroup of order 2 gives x_{1} = 1.
• Compute x_{2} ≡ x (mod 27): extracting the base-3 digits of x_{2} as in step 4 gives x_{2} = 8.
• Compute x_{3} ≡ x (mod 5): similarly, x_{3} = 2.
• Solving the triple of congruences:
x ≡ 1 (mod 2)
x ≡ 8 (mod 27)
x ≡ 2 (mod 5)
with the CRT gives x ≡ 197 (mod 270), such that x = log_{43} 210 = 197.
Indeed: 43^{197} ≡ 210 (mod 271)
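Steps 1-6 can be sketched as follows, using exhaustive search for the small prime-order sub-logarithms (in practice one would use Pollard's rho there, as noted above):

```python
def dlog_small(g: int, h: int, p: int, q: int) -> int:
    # Exhaustive search in the (small) order-q subgroup: find l with g^l = h (mod p).
    acc = 1
    for l in range(q):
        if acc == h:
            return l
        acc = acc * g % p
    raise ValueError("h is not in the subgroup generated by g")

def pohlig_hellman(g, y, p, n, factors):
    # factors: {prime: exponent} factorization of the group order n.
    x, M = 0, 1
    for q, e in factors.items():
        pe = q ** e
        g_i = pow(g, n // pe, p)      # element of order q^e       (step 2)
        y_i = pow(y, n // pe, p)
        g_q = pow(g_i, pe // q, p)    # element of order q
        x_i = 0                       # base-q digits of x mod q^e (steps 3-4)
        for j in range(e):
            h = pow(pow(g_i, -x_i, p) * y_i % p, pe // q ** (j + 1), p)
            x_i += dlog_small(g_q, h, p, q) * q ** j
        # Fold x = x_i (mod q^e) into the running CRT solution     (step 5)
        x += M * ((x_i - x) * pow(M, -1, pe) % pe)
        M *= pe
    return x % M                      # step 6

# Worked example from the text: G = Z_271^*, g = 43, n = 270 = 2 * 3^3 * 5, y = 210
assert pohlig_hellman(43, 210, 271, 270, {2: 1, 3: 3, 5: 1}) == 197
assert pow(43, 197, 271) == 210
```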
INDEXCALCULUS METHOD
Over the finite fields where the DLP is defined, there is additional structure beyond the multiplicative structure. The index-calculus methods take advantage of this extra structure^{[13,49]}.
The index calculus method is a subexponential algorithm for solving the DLP over a finite group G, i.e., given g, y ∈ G with g a generator, we seek a value β ∈ Z/|G|Z satisfying y = g^{β}. We say that β = log_{g} y or β = ind_{g} y, the index of y with respect to g.
The basic idea, which goes back to Kraitchik^{[50]}, is that if:

x_{1} x_{2} .... x_{k} = y_{1} y_{2} .... y_{l}

for some elements x_{i}, y_{j} of GF(q)*, then:

log_{g} x_{1} + .... + log_{g} x_{k} ≡ log_{g} y_{1} + .... + log_{g} y_{l} (mod q-1)

If we collect many equations of the above form (with at least one of them involving an element z, such as g, for which log_{g} z is known) and they do not involve too many x_{i} and y_{j}, then the system can be solved. This is similar to the situation in the IFP^{[51]}. Progress in index calculus algorithms has come from better ways of producing relations that lead to such equations.
The index calculus method involves forming relations over what is called a factor base, F = {p_{1}, p_{2}, ...., p_{v}}, formed by choosing a list of the first v primes. How many we choose is critical. The smooth elements with respect to F are all integers whose prime factors are all less than or equal to the largest prime on our list.
Once the number of relations found exceeds the number of primes in the factor base, we can solve the system of equations to recover the discrete logs of these primes. From these logs we can then, with more computation, recover the discrete log of any chosen element.
In determining the discrete logs of the elements, we start by looking at powers of the generator: we pick a random integer k such that 1≤k≤p-1 and compute g^{k} ∈ G (lifting these values to the integers if the field happens to be given as F_{p} instead of Z_{p}). If g^{k} factors completely over F, we record the factorization in product form:

g^{k} = p_{1}^{e_{1}} p_{2}^{e_{2}} .... p_{v}^{e_{v}} (mod p)

where p_{i} ∈ F is the i-th prime of our factor base and e_{i} is its corresponding exponent in the factorization of g^{k}. Since Z^{*}_{p} has order p-1, each such factorization yields the relation:

k ≡ e_{1} log_{g} p_{1} + .... + e_{v} log_{g} p_{v} (mod p-1)

We continue computing powers of the generator until we obtain v independent relations. We solve these equations (which are typically sparse) to get the discrete log of each of the v primes. Now, to find the discrete log of y ∈ F_{p}, we compute the quantities y, y·g, y·g^{2}, ...., lifting these as well to Z if needed. We continue the computation until we find an element y·g^{β} that factors completely over our factor base, such that:

y·g^{β} = p_{1}^{e_{1}} p_{2}^{e_{2}} .... p_{v}^{e_{v}} (mod p)

Taking logs on both sides gives:

log_{g} y ≡ e_{1} log_{g} p_{1} + .... + e_{v} log_{g} p_{v} - β (mod p-1)

where e_{i} is the exponent of the i-th prime in this last factorization. We already know the discrete log of each of the v primes, so we can just solve this congruence for log_{g} y. Here, we get a runtime that is subexponential in log p, provided we make a good choice for the size v of the factor base^{[52]}.
How can we subvert this attack? We can use really large keys, as previously discussed, but this has significant drawbacks. The system becomes even slower with larger keys, making it undesirable. Additionally, large keys require more computation space, so such cryptosystems cannot fit on small-form-factor, constrained wireless devices that also need to utilize encryption, e.g., smartcards. Since larger keys do not seem to offer much of a fix, we need to consider alternative cryptosystems like ECC, which makes use of the ECDLP instead of the classical DLP; for more detail on the ECDLP see^{[18-20]}.
The mechanics of index calculus
Input: A finite group G = <g> of order n and g, y ∈ G
Output: The discrete logarithm of y to the base g
Procedure:
• (Factor base) Find a subset F = {p_{1}, p_{2}, ...., p_{v}} of G such that a significant fraction of all elements in G can be efficiently expressed as a product of elements from F.
• Select a random integer k, 0≤k≤n-1 and compute g^{k}.
• Try to write g^{k} as a product of elements from F:

g^{k} = p_{1}^{c_{1}} p_{2}^{c_{2}} .... p_{v}^{c_{v}}     (*)

• If (*) is successful, then taking the discrete log of both sides of (*) results in the linear relation:

k ≡ c_{1} log_{g} p_{1} + .... + c_{v} log_{g} p_{v} (mod n)

• Collect (v+c) linear relations of this form by choosing other random k's.
• (Discrete logs of the elements of F) Working modulo n, solve the linear system of (v+c) equations in v unknowns and obtain the values of the discrete logarithms of the factor base: log_{g} p_{i}, 1≤i≤v.
• (Actual computation of the discrete log) Select a random r, 0≤r≤n-1 and compute y·g^{r}. Try to write y·g^{r} as a product of elements of F:

y·g^{r} = p_{1}^{d_{1}} p_{2}^{d_{2}} .... p_{v}^{d_{v}}     (**)

• If y·g^{r} is not representable as (**), then choose another r and retry the factorization. Otherwise, taking discrete logs of both sides of (**) results in:

log_{g} y ≡ d_{1} log_{g} p_{1} + .... + d_{v} log_{g} p_{v} - r (mod n)
Time complexity:
• L_{p}[1/3, (64/9)^{1/3}] for G = Z^{*}_{p}, using the NFS variant of index calculus.
• exp((c + o(1)) (ln q)^{1/3} (ln ln q)^{2/3}) with q = 2^{m} and c<1.587, for G = GF(2^{m}) (Coppersmith's algorithm).
Implementation of the index calculus method: To help us understand the power of cracking DLP-based cryptosystems, we will use index calculus to recover the master shared secret key formed using the DH key exchange protocol. Recall, too, that Alice and Bob used the same key to exchange secure data using the ElGamal crypto-algorithm. Suppose Eve, at her nondescript hideout, seeks to recover the key, so she needs to solve:

g^{s} ≡ 11093904324 (mod p)

where s (= s_{A}s_{B}) is the shared key, g = 12 is the generator and p = 12884901893 is the prime.
We start with the precomputation of the factor-base logarithms. We consider a factor base consisting of the first 15 prime numbers:

F = {p_{1}, p_{2}, ...., p_{15}} = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47}

and try to solve the logarithm problem for all elements of the factor base. In other words, we want to find k_{i} = log_{12} p_{i} for 1≤i≤15.
Now select a random integer exponent k and compute 12^{k} (mod p), checking whether the residue can be factored completely over the factor base. For example:

12^{422} ≡ 12084561537 ≡ 3×17×23×10302269 (mod p)

where we observe that 10302269 cannot be factored further over our factor base F. Recall that the larger the factor base, the more easily we find residues that factor completely over it, but the price we pay is a larger number of unknowns k_{i} to solve for. Whenever a residue does factor completely over the factor base, we obtain a linear relation between the unknown k_{i}'s, e.g.:

12^{9625853812} ≡ 4616675 = 5^{2}×7×23×31×37 (mod p)

gives the relation:

9625853812 ≡ 2k_{3}+k_{4}+k_{9}+k_{11}+k_{12} (mod p-1)

where each k_{i} is given by k_{i} = log_{12} p_{i}.
Next we collect enough relations to enable us to solve for the unknown k_{i}'s, e.g.:

12^{9102048310} ≡ 78540 = 2^{2}×3×5×7×11×17 (mod p)
12^{7382258392} ≡ 254646 = 2×3^{2}×7×43×47 (mod p)
12^{12298253855} ≡ 26455 = 5×11×13×37 (mod p)
12^{4835947410} ≡ 119510 = 2×5×17×19×37 (mod p)
12^{6525201043} ≡ 4238785 = 5×23×29×31×41 (mod p)

from which we get the following linear relations:

9102048310 ≡ 2k_{1}+k_{2}+k_{3}+k_{4}+k_{5}+k_{7} (mod p-1)
7382258392 ≡ k_{1}+2k_{2}+k_{4}+k_{14}+k_{15} (mod p-1)
12298253855 ≡ k_{3}+k_{5}+k_{6}+k_{12} (mod p-1)
4835947410 ≡ k_{1}+k_{3}+k_{7}+k_{8}+k_{12} (mod p-1)
6525201043 ≡ k_{3}+k_{9}+k_{10}+k_{11}+k_{13} (mod p-1)

The solutions are given as follows:
k_{1} = log_{12} 2 = 420700703
k_{2} = log_{12} 3 = 4470896487
k_{3} = log_{12} 5 = 3803513117
k_{4} = log_{12} 7 = 2365150781
k_{5} = log_{12} 11 = 3116054641
k_{6} = log_{12} 13 = 3492178431
k_{7} = log_{12} 17 = 12702231662
k_{8} = log_{12} 19 = 8006496046
k_{9} = log_{12} 23 = 1050306206
k_{10} = log_{12} 29 = 127183706
k_{11} = log_{12} 31 = 9601764817
k_{12} = log_{12} 37 = 1886507666
k_{13} = log_{12} 41 = 4827335089
k_{14} = log_{12} 43 = 7844296260
k_{15} = log_{12} 47 = 9793819458

where all k_{i}'s are computed modulo p-1.
Finally, we are ready to solve our original problem: 12^{s} ≡ 11093904324 (mod p). Pick a random exponent r, combine it with y and check whether y·g^{r} (mod p) factors completely over our factor base. After a few tries we get lucky:

y·g^{r} ≡ 11093904324×12^{8766128316} ≡ 19040076 = 2^{2}×3^{3}×11^{2}×31×47 (mod p)

so that:

log_{g} y + r ≡ 2k_{1}+3k_{2}+2k_{5}+k_{11}+k_{15} (mod p-1)
log_{g} y ≡ 2k_{1}+3k_{2}+2k_{5}+k_{11}+k_{15} - 8766128316 ≡ 33554432 (mod p-1)

→ log_{g} y = log_{12} 11093904324 = 33554432
Indeed: 12^{33554432} ≡ 11093904324 (mod p)
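The full computation above is hard to reproduce by hand, but the same steps can be sketched self-containedly over a much smaller hypothetical prime. Here p = 107 is chosen so that the linear algebra can be done modulo the odd prime factor 53 of the group order and then lifted; all parameters are illustrative:

```python
# Index-calculus sketch for a small illustrative prime (not the 11-digit p above).
p, g = 107, 2                 # 2 generates Z_107^*; group order n = 106 = 2 * 53
n, q = 106, 53
FB = [2, 3, 5, 7, 11, 13]     # factor base: the first 6 primes

def smooth_vector(m):
    # Trial-divide m over FB; return the exponent vector, or None if not smooth.
    vec = []
    for pi in FB:
        e = 0
        while m % pi == 0:
            m //= pi
            e += 1
        vec.append(e)
    return vec if m == 1 else None

def solve_mod_prime(rows, rhs, ncols, q):
    # Gauss-Jordan elimination over GF(q) for an overdetermined consistent system.
    M = [[v % q for v in r] + [b % q] for r, b in zip(rows, rhs)]
    row = 0
    for col in range(ncols):
        piv = next(i for i in range(row, len(M)) if M[i][col])
        M[row], M[piv] = M[piv], M[row]
        inv = pow(M[row][col], -1, q)
        M[row] = [v * inv % q for v in M[row]]
        for i in range(len(M)):
            if i != row and M[i][col]:
                M[i] = [(a - M[i][col] * b) % q for a, b in zip(M[i], M[row])]
        row += 1
    return [M[i][ncols] for i in range(ncols)]

# 1. Relation collection: whenever g^k is smooth, k = sum e_i*log_g(p_i) (mod n).
rows, rhs = [], []
for k in range(1, n):
    vec = smooth_vector(pow(g, k, p))
    if vec is not None:
        rows.append(vec)
        rhs.append(k)

# 2. Solve the system modulo the odd prime factor 53 of n, then lift each
#    factor-base log to mod 106 by testing the two candidates directly.
logs = []
for pi, l in zip(FB, solve_mod_prime(rows, rhs, len(FB), q)):
    cand = l if pow(g, l, p) == pi else l + q
    logs.append(cand)

# 3. Individual logarithm: find r with y*g^r smooth, then read off log_g(y).
y = 90
for r in range(n):
    vec = smooth_vector(y * pow(g, r, p) % p)
    if vec is not None:
        x = (sum(e * l for e, l in zip(vec, logs)) - r) % n
        break
```

Because g generates the whole group, the sweep in step 1 is guaranteed to produce a unit-vector relation for every factor-base prime, so the system is always solvable here; a real implementation would sample k at random and stop once enough independent relations are found.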
The time complexity of the index calculus is given by exp((1.923+o(1)) t^{1/3} (ln t)^{2/3}), where t = ln p^{[53]}. The memory requirement typically equals the square root of the time complexity.
Table 3: 
The complexity of different methods to take discrete logarithms
for p≈2^{t} 

This makes the index calculus algorithm and its variants the only methods for taking discrete logarithms with subexponential complexity (Table 3).
THE FUTURE STATE OF THE ART IN IFP AND DISCRETE LOGARITHM COMPUTATION
The basic question at the onset for an implementer of DLP-based crypto-algorithms is how large a discrete log problem can be handled by currently available state-of-the-art computational tools. For a conservative estimate, it is appropriate to consider what has been accomplished on the Integer Factorization Problem (IFP). Much more effort has been devoted to the IFP than to discrete logs and most of the leading algorithms are similar. Thus, although discrete logs in prime fields do appear harder than factoring integers of the same size, it is prudent to disregard this difference when choosing security parameters. The Number Field Sieve (NFS) is currently at the cutting edge of research into integer factoring algorithms capable of factoring large composite numbers of over 100 digits^{[54]}. The current record in factoring a generally hard integer is the 200-decimal-digit challenge integer from RSA Data Security, Inc., RSA-200, which was factored with the general number field sieve (GNFS) on May 9, 2005 by Bahr, Boehm, Franke and Kleinjung^{[55]}. Among the Cunningham integers, the record is the factorization of a 248-decimal-digit integer by the Special Number Field Sieve (SNFS), accomplished by Aoki, Kida, Shimoyama, Sonoda and Ueda (CRYPTREC) on April 04, 2004^{[56,57]}.
Computing power is measured in MIPS-years: a million-instructions-per-second computer running for one year, or about 3×10^{13} instructions. A 100-MHz Pentium III is about a 50-MIPS machine; a 1600-node Intel Paragon is about 50,000 MIPS. In 1983, a Cray X-MP supercomputer factored a 71-digit number in 0.1 MIPS-years, using 9.5 CPU hours. That was expensive. Factoring the 129-digit number in 1994 required 5000 MIPS-years and used the idle time of 1600 computers around the world over an eight-month period. Although it took longer, it was essentially free.
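These figures can be sanity-checked with trivial arithmetic (a back-of-the-envelope sketch; the Cray rating derived below is an inference from the quoted figures, not a quoted specification):

```python
# One MIPS-year: 10^6 instructions per second, sustained for one year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
mips_year = 1e6 * SECONDS_PER_YEAR      # ~3.16e13 instructions, matching the text
print(f"{mips_year:.2e}")               # prints: 3.16e+13

# Spending 0.1 MIPS-years in 9.5 CPU hours implies a sustained rate of
# roughly 0.1 * (hours per year) / 9.5 MIPS for the 1983 Cray X-MP run.
cray_mips = 0.1 * (365.25 * 24) / 9.5
print(round(cray_mips))                 # prints: 92
```

The implied rate of roughly 90 MIPS is consistent with a single early-1980s supercomputer, which is what makes the contrast with the distributed, essentially free 5000 MIPS-year computation of 1994 so striking.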

Fig. 2: 
Comparison of security levels of ECC and RSA/DSA 
These two computations used what is called the quadratic sieve, but a newer, more powerful algorithm has arrived. The general number field sieve is faster than the quadratic sieve for numbers well beyond 116 digits and can factor a 512-bit number over 10 times faster; it would take less than a year to run on an 1800-node Intel Paragon. See Fig. 2 for current security-level estimates, expressed in MIPS-years, for public-key sizes.
Odlyzko argues in^{[58]} that, given the record of improvements in the index calculus algorithms, it seems imprudent to assume that the current version of GNFS is the best that will be available for a long time. At the least, it seems a reasonable precaution to assume that future algorithms will be as efficient as today's SNFS, in which case even 1024-bit RSA moduli might be insecure for anything but short-term protection.
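Odlyzko's concern can be made concrete by evaluating the heuristic running-time estimates at a 1024-bit modulus. A minimal sketch, assuming the standard heuristic constants c = 1.923 for GNFS and c = 1.526 for SNFS:

```python
import math

def L(bits, c):
    """Heuristic cost exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)) for n of `bits` bits."""
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Security of a 1024-bit modulus, as log2 of the heuristic work factor.
gnfs_bits = math.log2(L(1024, 1.923))   # GNFS: on the order of 2^87 operations
snfs_bits = math.log2(L(1024, 1.526))   # SNFS: on the order of 2^69 operations
print(round(gnfs_bits), round(snfs_bits))
```

The gap of roughly 2^{18} between the two estimates is the point of the caution above: an algorithmic advance that merely matched today's SNFS would sharply erode the security margin of 1024-bit moduli.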
Therefore, the baseline trade-off is that the size of n should be chosen such that the time and cost of performing the factorization exceed the value of the secured/encrypted information. But even then, great care must still be taken in the overall cryptodesign, as developments in integer factorization have come much faster than foreseen, and it is a precarious matter to venture upon quantitative forecasts in this field. Moreover, one should realize that it always remains possible that a new computational method could be invented from an unsuspected quarter which makes factoring easy (e.g., quantum computing, if an operative quantum computer were to be realized in the not-so-distant future); fortunately or unfortunately, depending on which side you are on, no one knows how to build one yet! Shor^{[59,60]} showed that if such a machine could be built, integer factorizations and discrete logs (including elliptic curve discrete logs) could be computed in polynomial time. This result has stimulated an explosion of research on quantum computers^{[61]}. The one comforting factor is that all experts agree that even if quantum computers are eventually built, it will take many years to do so (at least for machines on a scale that will threaten modern public-key cryptosystems), and so there will be advance warning about the need to develop and deploy alternative cryptoalgorithms.
CONCLUSIONS
We have discussed a method for implementing a public-key cryptosystem whose security rests in part on the difficulty of solving discrete logs. If the cryptosecurity designs and methods are appropriately implemented, it permits secure communications to be established without the use of a courier to carry keys, and it also permits one to sign digitized documents.
In general, the strength of encryption is related to the difficulty of discovering the key, which in turn depends on both the cipher used and the length of the key. No matter which technique you choose, you must keep in mind that a sufficiently determined and well-resourced cryptanalyst may eventually decipher the message. Hence, you should always take all the necessary precautions to protect your data, ranging from the proper choice of cryptographic keys to physically protecting your assets and yourself.
Overall, the baseline trade-off is that the size of n and/or the prime p should be chosen such that the time and cost of cracking the cryptosystem exceed the value of the secured/encrypted information. But even then, great care must still be taken in the overall cryptodesign, as developments in integer factorization, and equivalently in solving the DLP, have come much faster than foreseen, and it is a precarious matter to venture upon quantitative forecasts in this field. Moreover, one should realize that it always remains possible that a new computational method could be invented from an unsuspected quarter which makes factoring 'easy' (e.g., quantum computing). Today, the wise cryptodesigner is ultraconservative when choosing key lengths for a public-key cryptosystem. He must consider the intended security, the key's expected lifetime and the current state of the art in factoring or, equivalently, solving the DLP.