INTRODUCTION
The job shop problem is regarded as one of the hardest in combinatorial optimization (Brinkkotter and Brucker, 2001). Over the past decades, many researchers have focused on job shops and proposed several effective algorithms for them. These can be classified as optimization and approximation algorithms. The optimization algorithms are usually based on the branch and bound scheme (Brucker et al., 1994; Carlier and Pinson, 1994). These algorithms have achieved considerable success; however, they incur high computational cost. On the other hand, approximation algorithms are more effective for large problem instances. Among the heuristics, a successful approach is the Shifting Bottleneck (SB).
SB was first presented by Adams et al. (1988). Later, Dauzère-Pérès and Lasserre (1993), Applegate and Cook (1991), Balas et al. (1995) and Mukherjee and Chatterjee (2006) enhanced this method. There have been extensive computational experiments evaluating the performance of several versions of the SB routine on different shop configurations (Demirkol et al., 1997; Holtsclaw and Uzsoy, 1996; Uzsoy and Wang, 2000). The general consensus is that SB performs quite well compared to various dispatching rules on almost all problem types.
The main strategy of SB lies in relaxing the problem into m single machine subproblems and solving each single machine problem independently. This approach consists of four functions: problem decomposition, bottleneck identification, subproblem scheduling and reoptimization. Based on a specified scheduling criterion, the machine having the maximum lower bound is selected as the bottleneck machine and SB sequences the bottleneck machine first while ignoring the remaining unscheduled machines. After the machine is scheduled, the reoptimization procedure is triggered. The SB algorithm repeats the single machine scheduling procedure until all machines are scheduled (Wu et al., 2006).
Subproblem solution procedures (SSPs), which are single machine problems, and reoptimization are the two main functions in SB. Holtsclaw and Uzsoy (1996) tested the quality and efficiency of the solution on 12 combinations of machine criticality measures used for selecting shifting machines and subproblem solution procedures (SSPs). According to their research, more sophisticated algorithms usually have better efficiency. Demirkol et al. (1997) found that better SSPs and reoptimization yield higher solution quality. Uzsoy and Wang (2000) modified the testing structure of Demirkol et al. (1997) to distinguish bottleneck machines from nonbottleneck ones. They found that a system with more significant bottleneck machines might have higher search efficiency.
In large scale problems, implementing heuristic approaches for the subproblem solution procedure in SB can decrease computational effort. Although using exact approaches such as the Carlier algorithm (1982) for the single machine subproblems can improve the solution quality of job shop problems, it increases computational effort in large scale problems. Wenqi and Aihua (2004) presented a heuristic, the Schrage algorithm with disturbance, for solving the subproblems, based on which they presented an improved shifting bottleneck (ISB) and showed that it performs better than SB. However, it has some drawbacks. This study tries to resolve some of these drawbacks and presents a modified Schrage algorithm that is more effective than both the original Schrage algorithm and the Schrage algorithm with a disturbance (DS).
Reoptimization is one of the important issues in SB that can increase computational effort, especially in large scale problems. This study presents a new approach that can omit reoptimization.
Job shop problem: The job shop scheduling problem can be described as follows: There are a set of jobs and a set of machines. Each job consists of a series of operations and each operation, with a fixed processing time, is processed by a certain machine. The problem consists in scheduling the jobs on the machines with the objective of minimizing the makespan, i.e., the time needed to finish all the jobs. Any schedule is subject to two constraints: (i) the precedence of the operations within each job must be respected; (ii) once a machine starts processing an operation it cannot be interrupted and each machine can process at most one operation at a time.
Let N = {0, 1,…, n} denote the set of operations (with 0 and n the dummy operations "start" and "finish"), M the set of machines, A the set of pairs of operations constrained by precedence relations and E_{k} the set of pairs of operations to be performed on machine k and which therefore cannot overlap in time. Further, let p_{i} denote the (fixed) duration (processing time) and t_{i} the (variable) start time of operation i. The job shop problem can then be stated as:

(P) minimize t_{n}
subject to:
t_{j} - t_{i} >= p_{i} for all (i, j) ∈ A
t_{j} - t_{i} >= p_{i} or t_{i} - t_{j} >= p_{j} for all (i, j) ∈ E_{k}, k ∈ M
t_{i} >= 0 for all i ∈ N
Any feasible solution to (P) is called a schedule. It is useful to represent
this problem on a disjunctive graph G = (N, A, E), with node set N, ordinary
(conjunctive) arc set A and disjunctive arc set E (Balas, 1969). Conjunctive
arcs represent precedence relations and pairs of disjunctive arcs correspond
to operations that must be processed on the same machine. Figure
1 shows the graph for a problem with 8 operations (on three jobs) and three
machines.

Fig. 1: 
Digraph of an instance 
A selection S_{k} in E_{k} contains exactly one member of each disjunctive arc pair of E_{k}. Determining an acyclic selection S_{k} is equivalent to sequencing the jobs (operations) on machine k. A complete selection S consists of the union of selections S_{k}, one in each E_{k}, k ∈ M.
In the language of disjunctive graphs, the problem is to find an acyclic complete
selection S ⊂ E that minimizes the length of a
longest path in the directed graph D_{S}.
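As an illustration, once a complete selection fixes the direction of every disjunctive arc, the graph D_{S} is a DAG and the makespan is simply the length of a longest path from node 0 to node n. A minimal sketch (the dictionary encoding of the graph and the tiny one-job instance are our own illustrative assumptions, not the paper's data):

```python
# Sketch: makespan of a complete selection as the longest 0 -> n path in
# the resulting DAG. Node 0 is the dummy start, node n the dummy finish.
from functools import lru_cache

def longest_path(succ, p, n):
    """Length of a longest path from node 0 to node n, where succ[i]
    lists the successors of node i and p[i] is its processing time."""
    @lru_cache(maxsize=None)
    def dist(i):                      # longest i -> n path length
        if i == n:
            return 0
        return p[i] + max(dist(j) for j in succ[i])
    return dist(0)

# One job with operations 1 -> 2 -> 3 (p = 3, 2, 4); dummies 0 and 4.
succ = {0: [1], 1: [2], 2: [3], 3: [4]}
p = {0: 0, 1: 3, 2: 2, 3: 4}
print(longest_path(succ, p, 4))  # 9
```

With several jobs and fixed machine sequences, the same function evaluates the makespan of any acyclic complete selection.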
Shifting bottleneck: Shifting Bottleneck is a heuristic approach used in job shop scheduling problems. The main strategy of SB lies in relaxing the problem into m single machine subproblems and solving each single machine problem independently. This approach consists of four functions: problem decomposition, bottleneck identification, subproblem scheduling and reoptimization. Based on a specified scheduling criterion, the machine having the maximum lower bound is selected as the bottleneck machine and SB sequences the bottleneck machine first while ignoring the remaining unscheduled machines. After the machine is scheduled, the reoptimization procedure is triggered. The SB algorithm repeats the single machine scheduling procedure until all machines are scheduled (Wu et al., 2006).
Let M_{0} be the set of machines that have already been sequenced, by choosing selections S_{p} (p ∈ M_{0}). Let (P(k, M_{0})) be the problem obtained by replacing each arc set E_{p} (p ∈ M_{0}) with the corresponding selection S_{p} and deleting each arc set E_{p} (p ∈ M\(M_{0} ∪ {k})). A bottleneck machine m ∈ M\M_{0} is such that v(m, M_{0}) = max{v(k, M_{0}): k ∈ M\M_{0}}, where v(k, M_{0}) is the value of a good solution to (P(k, M_{0})) (Fig. 2) (Wenqi and Aihua, 2004; Adams et al., 1988).
Let M_{0} be the set of machines already sequenced (M_{0} = Φ at the start).
In Step 1, to identify the next bottleneck machine to be sequenced, for each k ∈ M\M_{0} the following problem should be solved:
Also, to reoptimize in Step 2 the sequence of each critical machine k ∈ M_{0}, for each of such machines, a problem of the form (P(k, M′_{0})) is solved for some subset M′_{0} ⊂ M_{0}.

Fig. 2: 
Shifting bottleneck procedure 
The problem (P(k, M_{0})) is equivalent to that of finding a sequence for machine k that minimizes the maximum lateness, given that each operation i to be performed on machine k has, besides the processing time p_{i}, a release time r_{i} and a due date d_{i}. Here, r_{i} = L(0, i) and d_{i} = L(0, n) - L(i, n) + p_{i}, with L(i, j) the length of a longest path from i to j in D_{T} and T := ∪(S_{p}: p ∈ M_{0}). This latter problem in turn can be viewed as a minimum makespan problem where each job has to be processed in order by three machines, of which the first and the third have infinite capacity, while the second one (corresponding to machine k in the above model) processes one job at a time and where the processing time of job i is r_{i} on the first machine, p_{i} on the second and q_{i} := L(0, n) - d_{i} on the third machine. The numbers r_{i} and q_{i} are sometimes referred to as the head and tail of job i (Wenqi and Aihua, 2004; Adams et al., 1988).
Thus, the one-machine problems solved during the algorithm are of the following form:
Where, r_{i} and q_{i} are defined as above and N* is the set
of jobs to be processed on machine k.
The problem (P*(k, M_{0})) is a single machine problem with heads and tails, denoted 1|r_{j}, q_{j}|C_{max} in the literature. Below we present a heuristic for solving this problem.
A new heuristic for single machine: The problem 1|r_{j}, q_{j}|C_{max} is known to be strongly NP-hard. A number of special cases of this problem are solvable in polynomial time (Vakhania, 2004). Enumerative methods for solving 1|r_{j}, q_{j}|C_{max} have been studied by Baker and Su (1974), McMahon and Florian (1975), Carlier (1982) and Grabowski et al. (1986). The algorithm proposed by Carlier was successfully tested even for 10000 jobs (Lawler et al., 1982; Grabowski et al., 1986). Shi and Pan (2006) present three improved branch-and-bound algorithms based on the Carlier algorithm and other existing techniques that substantially outperform the Carlier algorithm in terms of CPU time and robustness.
The Schrage algorithm, a heuristic method suggested by Schrage (1971), is an effective algorithm for the single machine problem. The algorithms of McMahon and Florian (1975) and Carlier (1982) are based on the Schrage algorithm, which Lenstra (1977), Carlier (1982) and Potts (1980) used for generating a good solution of the 1|r_{j}, q_{j}|C_{max} problem. In both methods the Schrage heuristic is used at every node of a search tree to generate a complete solution. Thus, a good solution from the Schrage heuristic can decrease the search space in enumeration approaches such as the Carlier algorithm. Also, the computational effort of heuristic approaches such as the Schrage algorithm is lower than that of exact algorithms such as branch and bound (the Carlier algorithm) for the single machine problem. This can decrease computational effort in job shop problems (Wenqi and Aihua, 2004).
In a single machine problem with heads and tails, n independent jobs should be sequenced on a machine: a job i is available for processing by the machine at time r_{i}, has to spend an amount of time p_{i} on the machine and an amount of time q_{i} in the system after its processing by the machine. The objective is to minimize the makespan.
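For a fixed sequence, the makespan of such a problem is easy to evaluate: process the jobs in the given order, never starting a job before its head, and account for each tail. A minimal sketch (the data values are illustrative, not from the paper):

```python
def makespan(seq, r, p, q):
    """C_max of a permutation `seq` on one machine: job i starts no
    earlier than r[i], runs for p[i], then spends q[i] in the system."""
    t, cmax = 0, 0
    for i in seq:
        t = max(t, r[i]) + p[i]     # completion time on the machine
        cmax = max(cmax, t + q[i])  # add the tail
    return cmax

r = {1: 0, 2: 1}; p = {1: 3, 2: 2}; q = {1: 5, 2: 1}
print(makespan([1, 2], r, p, q))  # 8
```

Every heuristic discussed below is a rule for choosing the permutation that this function evaluates.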
Carlier (1982) represents a sequence for this problem with a conjunctive graph G = (X, U). The set X of nodes is obtained by adding two nodes O and * to the set I of jobs: X = I ∪ {O, *}, where O is a job beginning and * a job end. The set U of arcs is the union of three sets: U = U_{1} ∪ U_{2} ∪ U_{3}. Let U_{1} = {(O, i): i ∈ I}; arc (O, i) is valued by r_{i} so that job i cannot start before time r_{i}. Let U_{2} = {(i, *): i ∈ I}; arc (i, *) is valued by p_{i} + q_{i} since job i has to spend an amount of time p_{i} + q_{i} in the system after the beginning of its processing by the machine. Let U_{3} = {(i, j): job i precedes job j in the sequence}; arc (i, j) is valued by p_{i}; these arcs set the sequence. The aim is to find a sequence that minimizes the value of the critical path in the associated conjunctive graph.
Schrage algorithm (Carlier, 1982): In the Schrage algorithm, the ready job with the greatest q_{i} is scheduled first. In this algorithm, U is the set of jobs already scheduled, Ū is the set of jobs still to be scheduled and t is the current time. The steps of this algorithm are shown in Fig. 3.
In the Schrage algorithm, presented by Carlier (1982), when r_{i} < r_{j} and q_{i} > q_{j} the algorithm can generate the optimal solution, but when r_{i} < r_{j} and q_{i} < q_{j} it may produce a weak solution (Wenqi and Aihua, 2004; Carlier, 1982). According to Step 2 of the Schrage algorithm, in each stage only the ready jobs (r_{i} <= t) from set Ū are sequenced. In the following we denote these jobs by the set R. However, it is possible that there exists a job k (k ∈ Ū) with r_{k} > t whose tail q_{k} is much larger than the tail of every job i ( ∀ i ∈ R). In this situation it is logically better to include job k in the set R; ignoring such jobs may increase the makespan. Therefore, we need to expand the scope of the set R to all jobs in Ū so that jobs with large tails that are not yet ready have a chance to be selected.

Fig. 3: 
Schrage algorithm 
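The steps of Fig. 3 can be sketched as follows; the dictionary encoding of the jobs and the heap-based ready set are our own implementation choices, not part of the original presentation:

```python
import heapq

def schrage(jobs):
    """Schrage heuristic for 1|r_j, q_j|C_max.
    `jobs` maps a job name to a tuple (r, p, q); returns (sequence, C_max)."""
    pending = sorted(jobs, key=lambda j: jobs[j][0])  # by release time r
    ready, seq, t, cmax, k = [], [], 0, 0, 0
    while k < len(pending) or ready:
        # move every released job into the ready heap (largest q first)
        while k < len(pending) and jobs[pending[k]][0] <= t:
            j = pending[k]; k += 1
            heapq.heappush(ready, (-jobs[j][2], j))
        if not ready:                 # machine idle: jump to next release
            t = jobs[pending[k]][0]
            continue
        _, j = heapq.heappop(ready)
        t += jobs[j][1]               # completion on the machine
        cmax = max(cmax, t + jobs[j][2])
        seq.append(j)
    return seq, cmax

print(schrage({1: (10, 4, 10), 2: (15, 10, 41)}))  # ([1, 2], 66)
```

The printed instance is the two-job example discussed later in the text; Schrage keeps job 1 first because only ready jobs compete at each stage.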
Schrage algorithm with DS (Wenqi and Aihua, 2004): For solving a single machine problem the Schrage algorithm is easy to implement, but the quality of the solution it produces usually leaves much room for improvement. Wenqi and Aihua (2004) improved the Schrage algorithm and presented a more effective algorithm with a disturbance (DS). They stated that when the priority of r_{i}, p_{i}, q_{i} is not respected strictly, much better solutions can often be obtained. In the situation that the value of r for a job is small while the value of q for this job is not large enough, the algorithm may delay this job. Therefore, they applied a disturbance (DS) to the priority rules of the Schrage algorithm. The steps of the Schrage algorithm with DS are shown in Fig. 4.
Here, δ is the disturbance coefficient. The degree of disturbance, which determines the performance of DS, is affected by the value of the coefficient δ, so the suitable selection of δ in DS is an important issue. Experiments show that if the value of δ is too small or too large, the performance of DS is poor (Wenqi and Aihua, 2004): the former makes the machine idle too frequently to get enough reward and the latter makes DS degenerate into the Schrage algorithm. DS usually obtains its best solutions when the value of δ is between 1.0 and 6.0 (in their experiments, it is seldom over 6.0) and it tends to obtain the same solution for different values of δ.
The rationale of the Schrage algorithm with disturbance is correct, i.e., in the situation that the value of r of a job is small while the value of q of this job is not large enough, the algorithm may delay this job. In this algorithm, in each stage, all of the jobs can be sequenced (i.e., R = Ū) and the tail of the jobs that are not ready (whose ready time r_{i} is larger than t) is adjusted by δ×(r_{i} - t). The decision criterion for sequencing is u_{i}, where u_{i} = q_{i} - δ×(r_{i} - t) for r_{i} > t and u_{i} = q_{i} for r_{i} <= t. But there are three main problems in the modeling of DS. First, the decision criterion u_{i} is not sufficient on its own for sequencing. For example, if we have job 1 with r_{1} = 10, p_{1} = 4, q_{1} = 10 and job 2 with r_{2} = 15, p_{2} = 10, q_{2} = 41, then by this algorithm u_{1} = 10 and u_{2} = 11 (assuming δ takes its extreme value, i.e., 6), so job 2 is sequenced earlier than job 1. This choice is wrong and can increase the makespan, because when r_{2} > r_{1} + p_{1} there is no rational reason to delay job 1. Therefore, we need a more comprehensive decision criterion to resolve this important issue. Second, in each stage all of the unscheduled jobs are checked; this increases the computational effort in large scale problems. Third, the quality of the solution depends on the coefficient δ, but there is no semantics for selecting this coefficient. Also, if the coefficient δ is varied, the computational effort becomes high.

Fig. 4: 
Schrage algorithm with DS 
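The procedure of Fig. 4 can be sketched as follows; the tie-breaking and the job encoding are our own assumptions. With a very large δ the adjusted priorities collapse back to plain Schrage, matching the degeneration noted above:

```python
def schrage_ds(jobs, delta):
    """Schrage with disturbance (DS), after Wenqi and Aihua (2004):
    every unscheduled job competes with priority
    u_i = q_i - delta*(r_i - t) when r_i > t, else u_i = q_i.
    `jobs` maps a job name to (r, p, q); illustrative sketch only."""
    unsched, seq, t, cmax = set(jobs), [], 0, 0
    while unsched:
        # never sit before the earliest remaining release time
        t = max(t, min(jobs[j][0] for j in unsched))
        def u(j):
            r, p, q = jobs[j]
            return q - delta * (r - t) if r > t else q
        j = max(unsched, key=u)     # all unscheduled jobs are candidates
        r, p, q = jobs[j]
        t = max(t, r) + p           # may idle waiting for the chosen job
        cmax = max(cmax, t + q)
        seq.append(j); unsched.remove(j)
    return seq, cmax

print(schrage_ds({1: (0, 5, 1), 2: (3, 4, 10)}, delta=2))  # ([2, 1], 17)
```

On this illustrative instance plain Schrage yields makespan 19 (job 1 first), while the disturbance lets the large-tail job 2 jump ahead, improving the makespan to 17.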
Modified Schrage algorithm: As stated in the previous section, an important area for improvement in the Schrage algorithm is considering jobs with large tails even if they are not ready. This means that we must expand the set R by adding those jobs to the set. But the main question is: which jobs can be added? In other words, in the Schrage algorithm, if r_{i} < r_{j} and q_{i} > q_{j}, then i is sequenced before j, which is logically true and proved in the literature (Carlier, 1982). But if r_{i} < r_{j} and q_{i} < q_{j}, we face the challenge of deciding which of them must be sequenced first. This question is addressed by the following theorem.
Theorem 1: In the one machine sequencing problem, if there are two jobs i and j with 0 <= r_{i} < r_{j} and q_{i} < q_{j}, a necessary condition for sequencing j before i is (q_{j} > m + q_{i} and p_{i} > m), where r_{j} - t = m and p_{j} >= 0.

Fig. 5: 
Associated conjunctive graph 
Proof: We represent the sequence of jobs with a conjunctive graph G = (X, U), similar to the Carlier algorithm (Fig. 5).
Let U be the set of jobs already scheduled, Ū be the set of all other jobs and c_{k} be the completion time of the jobs belonging to set U in stage k. Job i is a job with minimum r in the set Ū. Let t = max(c_{k}, r_{i}). Assume that job j is a job with r_{j} > t and q_{j} > q_{i}. We prove that a necessary condition for sequencing j before i is q_{j} > m + q_{i} and p_{i} > m, where r_{j} - t = m and p_{j} >= 0. There are two options:
Option 1: job i is scheduled before job j.
Let L_{i}^{1} be the length of the path passing through 0, 1, i, * and L_{j}^{1} be the length of the path passing through 0, 1, j, * and L_{1} = Max(L_{i}^{1}, L_{j}^{1}). Thus we have:

L_{1} = Max(t + p_{i} + q_{i}, t + p_{i} + p_{j} + q_{j})  (1)
Option 2: job j is scheduled before job i.
Let L_{i}^{2} be the length of the path passing through 0, 1, i, * and L_{j}^{2} be the length of the path passing through 0, 1, j, * and L_{2} = Max(L_{i}^{2}, L_{j}^{2}). Then:

L_{2} = Max(r_{j} + p_{j} + q_{j}, r_{j} + p_{j} + p_{i} + q_{i})  (2)
Assume that job j is sequenced earlier than job i; then L_{2} must be lower than L_{1}, because the aim is to minimize the longest path from 0 to *. Therefore, formula (3) should hold:

Max(r_{j} + p_{j} + q_{j}, r_{j} + p_{j} + p_{i} + q_{i}) < Max(t + p_{i} + q_{i}, t + p_{i} + p_{j} + q_{j})  (3)
This means that formula (4) or (5) should hold:

Max(r_{j} + p_{j} + q_{j}, r_{j} + p_{j} + p_{i} + q_{i}) < t + p_{i} + q_{i}  (4)

Max(r_{j} + p_{j} + q_{j}, r_{j} + p_{j} + p_{i} + q_{i}) < t + p_{i} + p_{j} + q_{j}  (5)
Formula (4) holds if formulae (6) and (7) both hold:

r_{j} + p_{j} + q_{j} < t + p_{i} + q_{i}  (6)

r_{j} + p_{j} + p_{i} + q_{i} < t + p_{i} + q_{i}  (7)
Formula (5) holds if formulae (8) and (9) both hold:

r_{j} + p_{j} + q_{j} < t + p_{i} + p_{j} + q_{j}  (8)

r_{j} + p_{j} + p_{i} + q_{i} < t + p_{i} + p_{j} + q_{j}  (9)
Let r_{j} - t = m ⇒ r_{j} = m + t  (10)
By substituting (10) into formulae (6), (7), (8), (9), respectively, we have:

m + t + p_{j} + q_{j} < t + p_{i} + q_{i} ⇒ q_{j} < q_{i} + p_{i} - p_{j} - m  (11)

m + t + p_{j} + p_{i} + q_{i} < t + p_{i} + q_{i} ⇒ m + p_{j} < 0  (12)

m + t + p_{j} + q_{j} < t + p_{i} + p_{j} + q_{j} ⇒ p_{i} > m  (13)

m + t + p_{j} + p_{i} + q_{i} < t + p_{i} + p_{j} + q_{j} ⇒ q_{j} > q_{i} + m  (14)
By the assumption p_{j} >= 0 and m >= 0, formula (12) is false. That means formula (7) is false and therefore formula (4) is always false. The formulae (13) and (14) are true by the theorem's condition (p_{i} > m and q_{j} > m + q_{i}). That means formulae (8) and (9) are true, so formula (5) is always true. Thus, formula (3) holds, which means L_{2} < L_{1}, and the theorem is proven.
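The inequality L_{2} < L_{1} can be checked numerically directly from formulae (1) and (2); the values below are illustrative and satisfy the theorem's condition q_{j} > m + q_{i} and p_{i} > m:

```python
# Numeric sanity check of Theorem 1 with illustrative values.
def L1(t, pi, qi, pj, qj, rj):      # job i first, formula (1)
    return max(t + pi + qi, t + pi + pj + qj)

def L2(t, pi, qi, pj, qj, rj):      # job j first, formula (2)
    return max(rj + pj + qj, rj + pj + pi + qi)

# m = r_j - t = 3; q_j = 10 > m + q_i = 4 and p_i = 5 > m = 3.
t, pi, qi, pj, qj, rj = 0, 5, 1, 4, 10, 3
print(L1(t, pi, qi, pj, qj, rj), L2(t, pi, qi, pj, qj, rj))  # 19 17
```

Scheduling j first gives the smaller longest path (17 versus 19), consistent with the theorem.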
If job j is critical, i.e., belongs to the critical path, then as mentioned above the necessary condition for sequencing j before i is (q_{j} > m + q_{i} and p_{i} > m). But if job j is not critical, then sequencing it before i can increase the delay (r_{j} - t) and the makespan. We define a heuristic condition for the criticality of job j as follows.
Proposition 1: In stage s, for k ∈ Ū, the quantity h(s) below
is a lower bound on the optimal makespan.

Fig. 6: 
Modified Schrage algorithm 
Proof: Carlier (1982) proved that for all I_{1} ⊆ I, the corresponding quantity is a lower bound on the optimal makespan. Since Ū ⊆ I, h(s) is a lower bound on the optimal makespan.
We now present a modified Schrage algorithm based on Theorem 1. The steps of the algorithm are shown in Fig. 6.
In this algorithm, in each stage, both ready jobs and jobs with large tails (based on Theorem 1) can be sequenced. This improves the result of the sequencing. Also, unlike the DS algorithm, not all unscheduled jobs are candidates; they are limited by the necessary condition. This can decrease the computational effort in each stage, especially in large problems. Furthermore, this algorithm does not depend on any coefficient such as δ.
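Since Fig. 6 is not reproduced here, the sketch below is only our reading of the description above: in each stage the plain Schrage candidate i is picked from the ready jobs, and a non-ready job j may displace it only when the necessary condition of Theorem 1 holds. The job encoding and the largest-tail tie-break among qualifying jobs are our own assumptions:

```python
def modified_schrage(jobs):
    """Sketch of the modified Schrage algorithm (MSA). `jobs` maps a job
    name to (r, p, q); returns (sequence, C_max). A non-ready job j may
    jump ahead of the best ready job i only if q_j > m + q_i and
    p_i > m, where m = r_j - t (Theorem 1)."""
    unsched, seq, t, cmax = set(jobs), [], 0, 0
    while unsched:
        t = max(t, min(jobs[j][0] for j in unsched))
        ready = [j for j in unsched if jobs[j][0] <= t]
        i = max(ready, key=lambda j: jobs[j][2])      # plain Schrage pick
        ri, pi, qi = jobs[i]
        best = i
        for j in unsched - {i}:                       # few candidates only
            rj, pj, qj = jobs[j]
            m = rj - t
            if m > 0 and qj > m + qi and pi > m:      # Theorem 1 condition
                if qj > jobs[best][2]:                # keep the largest tail
                    best = j
        r, p, q = jobs[best]
        t = max(t, r) + p
        cmax = max(cmax, t + q)
        seq.append(best); unsched.remove(best)
    return seq, cmax

print(modified_schrage({1: (10, 4, 10), 2: (15, 10, 41)}))  # ([1, 2], 66)
```

On the paper's two-job counterexample the condition p_{i} > m fails (4 > 5 is false), so MSA correctly refuses to delay job 1, whereas DS with a large δ would sequence job 2 first.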
We coded the three algorithms, the Schrage Algorithm (SA), the Schrage algorithm with disturbance (DS) and the Modified Schrage Algorithm (MSA), in MATLAB and tested them on 1000 problems. For each problem with n jobs, 3n integers were generated with uniform distributions between 1 and r_{max}, p_{max}, q_{max} (Carlier, 1982).
We set p_{max} = 50, r_{max} = q_{max} = n·k and examined 50 values for k and 20 values for n = 50, 100, …, 1000. The problems were solved by the above three algorithms and the best solutions (minimum makespan) were recorded. Table 1 shows the comparison of the best solutions between our algorithm and the two others, and the CPU time for running the 1000 problems with each algorithm.
Of the 1000 problems, MSA obtains the best solution in 880 cases. The results show that MSA obtained better results than SA and DS. The CPU time of SA is less than that of the other algorithms, while DS spent the most CPU time: since the DS algorithm computes the measure u for all remaining unscheduled jobs and checks all of them in each stage, it requires more computational effort in large scale problems. In the proposed algorithm, using the proven theorem, computing the measure u is limited to a few jobs, which decreases CPU time and complexity. MSA obtains much better solutions than DS and SA, and its small increase in CPU time with respect to SA can be ignored.
Table 1: 
Results of three algorithms on 1000 random problems 

Proposed approach for the job shop scheduling problem: The SB algorithm sequences the bottleneck machine first while ignoring the remaining unscheduled machines. After the machine is scheduled, the reoptimization procedure is triggered. SB repeats the single machine scheduling procedure until all machines have been scheduled. The subproblem solution procedure and reoptimization are two important factors in the SB approach. In the previous section, we presented an efficient heuristic for the subproblem solution that can decrease computational effort. This study also presents a new approach for omitting reoptimization in the SB procedure.
In the proposed approach, the ready operations are sequenced first while the operations on the critical machine are given preference. This can eliminate the reoptimization effort. In our approach, similar to SB, the job shop problem is decomposed into m single machine problems, but in a different way. In this approach, the ready operations on the same machine lie in a block and each block is a single machine problem. In each stage, the operations in a block are sequenced by a subproblem solution procedure, while the other unscheduled operations on the machine are also considered in the problem. If an operation in the block has not been prioritized, it is delayed.
For the sake of simplicity, we define critical machine in each stage as follows:
The subproblem solution procedure is based on the modified Schrage algorithm presented in the previous section. In this procedure, the prioritization is based on the heads and tails of the operations (u_{ij}); however, as stated above, we should prioritize the operations of critical machines. Criticality can be considered as a fuzzy concept: every machine can be critical to a membership degree. The membership degree of a machine can be calculated as follows:

Fig. 7: 
Subproblem solution procedure 
A delay on a critical machine can increase the makespan, but on a non-critical machine the operations can be delayed. We use this concept in our approach: the criticality of machines is taken into account in the subproblem solution procedure (Fig. 7). The symbols used in this algorithm are as follows:
N = No. of jobs
M = No. of machines
j_{i} = Job i
m_{j} = Machine j
o_{ij} = Operation of job i on machine j
U = Set of scheduled operations
Ū = Set of unscheduled operations
r_{ij} = The release time (head) of operation o_{ij}
q_{ij} = The delivery time (tail) of operation o_{ij}
p_{ij} = The processing time of operation o_{ij}
V_{j} = Set of the ready operations on machine j
I_{j} = Set of unscheduled operations on machine j
S_{ij} = Set of the successor operations of operation o_{ij}
The proposed algorithm for the job shop problem based on shifting bottleneck is shown in Fig. 8.
Calculations of heads and tails are based on all conjunctive arcs and the fixed disjunctive arcs (Brucker et al., 1994).
For the sake of simplicity, we can define the tail of the operations as follows:
This approach generates a feasible solution for the job shop problem. Then, the reoptimization phase of the SB approach can be used to improve solutions.

Fig. 8: 
Proposed approach 
Table 2: 
Comparison of the proposed approach to SB for instance problems 

EXPERIMENTAL RESULTS
The algorithm is coded in MATLAB. We used a set of job shop scheduling problems from benchmark problems (Fisher and Thompson, 1963; Adams et al., 1988; Wenqi and Aihua, 2004). The proposed approach is compared with the SB procedure. Comparisons are based on two main factors: solution quality (makespan) and CPU time as computational effort. The results of the experiments are presented in Table 2.
Experimental results show that for small sizes up to 6x6 our method gives the optimal solution. At large scales the algorithm also gives good solutions. The CPU time of our method is much lower than that of the SB procedure, while the makespan of the proposed approach is comparable with SB. In sum, the experimental results show the superiority of our approach over SB in large scale problems, especially in computational effort. This approach can provide a good seed as an initial solution for iterative methods. Developing hybrid approaches with iterative heuristics such as tabu search, simulated annealing and genetic algorithms can be future research.
CONCLUSION
The subproblem solution procedure and reoptimization are two important factors in the SB approach that can increase computational effort. In large scale problems, we need effective procedures to decrease the computational effort. This study first presents a modified Schrage algorithm for single machine scheduling problems with heads and tails, which is an effective subproblem solution procedure. Then, it presents a heuristic approach for job shop scheduling that resolves the reoptimization difficulties in SB. Finally, the proposed algorithm is tested and validated. Experimental results show the superiority of our approach over SB in large scale problems, especially in computational effort. This approach can provide a good initial seed for search techniques. Developing hybrid approaches with iterative heuristics such as tabu search, simulated annealing and genetic algorithms can be future research.