INTRODUCTION
Variable Neighbourhood Search (VNS) is an improvement metaheuristic proposed
by Hansen and Mladenovic (2001). VNS deals with several neighbourhood structures
and therefore, the decision of which neighbourhood to apply at the current
iteration is crucial (Thompson and Dowsland, 1995), because
different neighborhood structures generate different landscapes. The literature
shows that VNS has been applied to solve challenging optimization problems such
as course time tabling.
However, to date, no work has been undertaken to solve course time
tabling problems by applying an adaptive search in VNS. Motivated by the above,
this work proposes an adaptive learning mechanism in the VNS algorithm in order
to guide the VNS algorithm to choose the right neighborhood structure during
the search. The proposed approach is tested over the Socha course time tabling
datasets (Socha et al., 2003). Results demonstrate
that the adaptive learning mechanism enables the proposed algorithm (AGVNS)
to obtain better solutions (compared with other variants of VNS)
for the course time tabling problem.
PROBLEM DESCRIPTION
University course time tabling problems can be defined as assigning a given
number of courses to a given number of timeslots and rooms subject to a set
of hard and soft constraints. This work used the Socha datasets (Socha
et al., 2003) which are modeled as follows:
• A set of C courses c_{i} (i = 1,…,C)
• A set of 45 timeslots t_{n} (n = 1,…,45)
• A set of R rooms r_{j} (j = 1,…,R)
• A set of F room features
• A set of M students
The course time tabling problem consists of assigning every course c_{i}
to a timeslot t_{n} and room r_{j} so that the following hard
constraints are satisfied:
• No student can be assigned to more than one course at the same time
• The room should satisfy the features required by the course
• The number of students attending the course should be less than or equal to the capacity of the room
• No more than one course is allowed in a timeslot in each room
The objective is to satisfy all hard constraints while minimizing the number
of students involved in the violation of soft constraints. The soft constraints
are equally penalized (penalty cost = 1 for each violation per student). The
soft constraints are:
• A student should not have a course scheduled in the last timeslot of the day
• A student should not have more than two consecutive courses
• A student should not have a single course on a day
Table 1 presents the characteristics of the Socha datasets
(Socha et al., 2003). The datasets consist of
11 problem instances, categorized as small, medium and large.
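The soft-constraint penalty described above can be sketched as a simple evaluation function. The representation below (a mapping from each student to the set of timeslots of their courses, with 45 timeslots arranged as 5 days of 9 slots as in the Socha datasets) and the per-extra-course counting for consecutive violations are illustrative assumptions, not the authors' implementation:

```python
# Illustrative soft-constraint penalty sketch (assumed representation:
# student_slots maps each student to the set of timeslots of their courses;
# 45 timeslots = 5 days x 9 slots per day, as in the Socha datasets).

SLOTS_PER_DAY = 9
DAYS = 5

def penalty(student_slots):
    cost = 0
    for slots in student_slots.values():
        for day in range(DAYS):
            day_slots = sorted(s % SLOTS_PER_DAY for s in slots
                               if s // SLOTS_PER_DAY == day)
            if not day_slots:
                continue
            # S1: a course scheduled in the last timeslot of the day
            if day_slots[-1] == SLOTS_PER_DAY - 1:
                cost += 1
            # S2: more than two consecutive courses (assumed counting:
            # one penalty per course beyond the second in a run)
            run = 1
            for a, b in zip(day_slots, day_slots[1:]):
                run = run + 1 if b == a + 1 else 1
                if run > 2:
                    cost += 1
            # S3: only a single course on a day
            if len(day_slots) == 1:
                cost += 1
    return cost
```

Each violation costs 1 per affected student, matching the equal penalization stated above.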
VNSBASIC FOR COURSE TIME TABLING PROBLEMS
Contrary to other local search methods, VNS does not follow a trajectory but
explores increasingly distant neighborhoods of the current incumbent solution
and jumps from this solution to a new one if and only if an improvement has
been made (Hansen and Mladenovic, 2001). Figure
1 shows the pseudocode of the basic VNS (Hansen and
Mladenovic, 2001).
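The basic VNS scheme of Fig. 1 can be sketched as follows; this is a generic sketch after Hansen and Mladenovic (2001), where the neighborhood functions and the objective f are placeholders supplied by the application, and the first-improvement local search is an assumed simplification:

```python
import random

def basic_vns(x0, f, neighborhoods, max_iters=1000):
    """Basic VNS sketch: explore increasingly distant neighborhoods and
    jump to a new solution only when an improvement has been made."""
    x_best = x0
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            x_shaken = neighborhoods[k](x_best)            # shaking
            x_local = local_search(x_shaken, f, neighborhoods[k])
            if f(x_local) < f(x_best):                     # move or not
                x_best = x_local
                k = 0                                      # restart from N1
            else:
                k += 1                                     # next neighborhood
    return x_best

def local_search(x, f, neighbor, tries=20):
    # simple first-improvement descent as a placeholder
    improved = True
    while improved:
        improved = False
        for _ in range(tries):
            y = neighbor(x)
            if f(y) < f(x):
                x, improved = y, True
                break
    return x
```

For example, minimizing f(x) = x² with the single neighborhood x ± 1 drives any integer start toward 0.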
Figure 2 shows the pseudocode of the VNSBasic for solving
course time tabling problems. In this work, the initial solution is produced
using a Largest Degree Graph Coloring heuristic. The feasible timetable is obtained by adjusting
appropriate courses in the schedule based on room availability and other hard
constraint violations until all the hard constraints are satisfied without taking
into account any of the soft constraint violations.
This study used the following neighborhood structures:
• (N1) Swap: Select two timeslots at random and simply swap all the courses in one timeslot with all the courses in the other timeslot while maintaining a feasible timetable
• (N2) Swap: Select any last timeslot of the day and another single timeslot (excluding the last timeslots of the day) and simply swap all the courses in the last timeslot with all the courses in the other timeslot while maintaining a feasible timetable
• (N3) Move: Select any 5 events from any last timeslot of the day and move them to a new free timeslot (excluding the last timeslots of the day) while maintaining a feasible timetable
• (N4) Swap: Select any 2 events at random and simply swap their timeslots while maintaining a feasible timetable
• (N5) Move: Select any 5 events from any timeslot and move them to a new free timeslot (excluding the last timeslots of the day) while maintaining a feasible timetable
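As an illustration, N4 can be sketched as follows; the timetable representation (a dict mapping each event to a (timeslot, room) pair) and the feasibility callback are assumptions standing in for the hard-constraint checks, not the authors' data structures:

```python
import random

def n4_swap(timetable, is_feasible):
    """Sketch of N4: swap the timeslots of two randomly chosen events,
    keeping their rooms, and reject the move if it breaks feasibility."""
    new = dict(timetable)                    # work on a copy
    e1, e2 = random.sample(list(new), 2)     # two distinct random events
    (t1, r1), (t2, r2) = new[e1], new[e2]
    new[e1], new[e2] = (t2, r1), (t1, r2)    # swap timeslots only
    return new if is_feasible(new) else timetable
```

Rejecting infeasible candidates and returning the unchanged timetable is one simple way to "maintain a feasible timetable"; repair-based variants are equally possible.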

Fig. 1: 
Steps of the basic VNS 

Fig. 2: 
Steps of the VNSBasic for solving course time tabling problems 
AGVNS ALGORITHM PROPOSED FOR COURSE TIME TABLING PROBLEMS
The basic idea of AGVNS is to guide the VNS algorithm to choose the right
neighborhood structure at the right time. Therefore, the adaptive learning mechanism
is introduced. With this mechanism, the algorithm memorizes which neighborhood
structure could effectively solve the specific soft constraint violations. Then,
the algorithm uses the memory information as a guide for selecting the right
neighborhood structure to enhance the quality of the best solution. For this
purpose, AGVNS is divided into two phases. Phase I is for learning while Phase
II is for improvement.
Phase I: Learning: In this phase, the algorithm memorizes which neighborhood
structure could effectively solve the specific soft constraint violations. For
this purpose, the memory is divided into three parts (representing the three
soft constraints S_{i}, where i = 1, 2, 3) and each part contains five
spaces (representing the neighborhood structures) to keep a running total of the
reduction values of the soft constraint violations.
This phase begins when the algorithm generates a solution x’ by applying
N_{k }(where k = 1) to the initial solution x^{0}. If the solution
x’ is better, the algorithm will identify which soft constraint violations
have reduced. The reduction value is calculated and updated in the memory. After
that, the algorithm repeats the same steps by applying the neighborhood structure
N_{k} (where k = 2 until k = k_{max}). The procedure is repeated
until the stopping condition for Phase I is met.
For example, given an initial solution x^{0}, where f(x^{0})
= S^{0}_{1} + S^{0}_{2} + S^{0}_{3},
after applying neighborhood structure N_{k} (where k = 1), an incumbent
solution x’ is generated where f(x’) = S’_{1} + S’_{2} + S’_{3}.
If f(x’) is less than f(x^{0}), then
the algorithm identifies which soft constraint violations were reduced.
For example, if S’_{1} < S^{0}_{1}, the reduction
value of the soft constraint violations is calculated as
d = S^{0}_{1} - S’_{1}. Then, the value d is added to
the memory at space 1 (which represents the first neighborhood structure)
in part 1 (which represents soft constraint 1 (S_{1})).
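The memory update described above can be sketched as a 3 × 5 table of accumulated reduction values; the 0-based indexing and function names are illustrative conveniences:

```python
# Learning-phase memory sketch: memory[s][k] accumulates the reduction in
# soft constraint s+1 achieved by neighborhood structure k+1
# (3 soft constraints, 5 neighborhood structures).

NUM_SOFT, NUM_NEIGH = 3, 5

def update_memory(memory, k, before, after):
    """before/after: per-constraint violation counts (S1, S2, S3) of the
    current solution and of the candidate produced by neighborhood k."""
    if sum(after) < sum(before):        # only learn from improving moves
        for s in range(NUM_SOFT):
            d = before[s] - after[s]    # reduction value d
            if d > 0:                   # this constraint was reduced
                memory[s][k] += d
    return memory
```

Note that a constraint whose violations increased (d ≤ 0) contributes nothing, and a non-improving candidate leaves the memory untouched.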
Table 2 illustrates an example of the memory contents after
the stopping condition of the learning phase (Phase I) has been met. Based on the
table, the highest total reduction value for soft constraint 1 (S_{1})
is 123, obtained after applying neighborhood structure 1 (N_{1})
to the solution x^{0}. This means that N_{1} is the most appropriate
neighborhood structure to select to solve soft constraint 1 while simultaneously
enhancing the quality of the solution x” (in Phase II). N_{2}, on
the other hand, is the most appropriate neighborhood structure to select
to solve soft constraint 2 (S_{2}) and soft constraint 3 (S_{3}).
This is because the highest total reduction value for soft constraint 2 (S_{2})
is 88, obtained after applying neighborhood structure 2 (N_{2})
to the solution x^{0}, and the highest total reduction value
for soft constraint 3 (S_{3}) is 78, also obtained after
applying neighborhood structure 2 (N_{2}) to the solution x^{0}.
This information is used in Phase II.
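Given the learned memory, selecting the recommended neighborhood for a constraint is simply an argmax over that constraint's row. In the sketch below, the winning totals (123, 88, 78) follow the Table 2 example from the text, while the remaining entries are made up for illustration:

```python
def best_neighborhood(memory, s):
    """Return the 0-based index of the neighborhood structure with the
    highest total reduction value for soft constraint s (0-based)."""
    row = memory[s]
    return row.index(max(row))
```

Ties resolve to the lowest-indexed neighborhood here; the paper does not specify a tie-breaking rule.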
Phase II: Improvement: In this phase, the algorithm uses the memory
(from Phase I) as a guide for selecting the most appropriate neighborhood structure
to enhance the quality of the best solution. For the first iteration, the initial
solution x^{0} is set to be the best solution x_{best}.
This phase starts when the algorithm generates a solution x” by randomly
applying any of the neighborhood structures (N_{k}) to the best solution
x_{best}. Then, the algorithm identifies the highest soft constraint
violations in the solution x”. Subsequently, the algorithm uses the memory
information from Phase I to select the most appropriate neighborhood structure
to solve the soft constraint violations to enhance the quality of the solution
x”.
Table 2: 
Example of the memory contents after the stopping condition
of the learning phase (Phase I) has been met


Fig. 3: 
Pseudo code of the AGVNS 
For example, if the highest soft constraint violations in the solution x”
is S_{1}, the algorithm will refer to the memory to select the most
appropriate neighborhood structure to solve the soft constraint 1 while enhancing
the quality of the solution x”. Based on Table 2,
the highest total reduction value for S_{1} is from N_{1}. Therefore,
N_{1} is used to enhance the quality of the solution x”. If it
turns out that the solution x* produced by that step is better than the best
solution x_{best}, the best solution x_{best} is updated by
the solution x* and the memory is updated by repeating step (i) to step (iii)
as in the first phase.
In instances where the solution x* is found to be worse than the best solution
x_{best}, the algorithm will apply the neighborhood structure N_{p}
to the solution x”. If the solution x** produced from that step is
better than the solution x”, then the solution x” is updated by the
solution x** and the memory is updated by repeating step (i) to step
(iii) as in the first phase. Otherwise, the algorithm continues the search with
N_{p} (where p ← p + 1). These steps are repeated until the solution x**
produced is better than the solution x” or p = p_{max}. Finally,
if the solution x” is better than the solution x_{best}, the solution
x_{best} is updated by the solution x”. The algorithm proceeds
to the next iteration until the stopping condition for Phase II is met. Figure
3 shows the pseudocode of the AGVNS.
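A compact sketch of the improvement phase, under the same illustrative representation as the earlier sketches: f returns the total penalty, soft_violations returns the per-constraint counts (S1, S2, S3), and the fallback loop simplifies the N_p / x** step to a single first-improvement pass. All names are assumptions, not the authors' code:

```python
import random

def phase2(x_best, f, soft_violations, neighborhoods, memory, iters=100):
    """Improvement-phase sketch: shake randomly, then apply the
    neighborhood that the learned memory recommends for the most
    violated soft constraint."""
    for _ in range(iters):
        x2 = random.choice(neighborhoods)(x_best)   # random shake of x_best
        # identify the most violated soft constraint in x"
        s = max(range(3), key=lambda i: soft_violations(x2)[i])
        k = memory[s].index(max(memory[s]))         # learned recommendation
        x_star = neighborhoods[k](x2)
        if f(x_star) < f(x_best):
            x_best = x_star                         # accept improvement
        else:
            # simplified N_p fallback: try the neighborhoods in turn
            for nb in neighborhoods:
                x_dd = nb(x2)
                if f(x_dd) < f(x2):
                    x2 = x_dd
                    break
            if f(x2) < f(x_best):
                x_best = x2
    return x_best
```

On a toy one-dimensional problem (f(x) = x with decrement/increment neighborhoods and a memory favoring the decrement), this loop drives the solution to 0, mirroring how the learned recommendation steers the search.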
RESULTS AND DISCUSSION
In order to assess the effectiveness of applying the learning mechanism in
VNS, the same neighborhood structures are used to compare AGVNS with VNSBasic.
The stopping conditions for the AGVNS that are used in Phase I and Phase II
are 2000 and 100,000 iterations, respectively. These values were determined
based on preliminary experiments. Table 3 presents the average
penalty cost for 20 runs.
It is clear from Table 3 that AGVNS outperforms VNSBasic. This is due to the
use of the adaptive learning mechanism to guide the algorithm to choose
the right neighborhood structure at the right time. The performance of AGVNS
has also been compared with other approaches in the literature. Table
4 presents the comparison results.
Table 3: 
Comparison results on the course time tabling problem between
VNSBasic and AGVNS

Table 4: 
Comparison results on the course time tabling problem between
AGVNS and other approaches

It can be seen that across all problem instances, AGVNS produced much
better results when compared against other approaches in the literature. For the
five small instances, AGVNS is able to solve them to optimality. The AGVNS results
for medium1, medium3, medium5 and the large instance are better than the results
obtained by other approaches. This demonstrates that AGVNS outperforms other
approaches in the literature and shows the effectiveness of
applying the adaptive learning mechanism in VNS.
CONCLUSION
This study has proposed the AGVNS algorithm for solving course time tabling
problems. The basic idea of the AGVNS is to guide the VNS algorithm to choose
the right neighborhood structure during the search. To this end, an adaptive
learning mechanism was introduced to memorize which neighborhood structure can
effectively solve specific soft constraint violations. The recommended neighborhood
structure (from the learning phase) is then used to enhance the quality of solutions
in the improvement phase. Results have demonstrated that the AGVNS produces
better solutions for course time tabling problems and outperforms the other stated
approaches. This finding shows the effectiveness of the adaptive learning mechanism
in guiding the VNS algorithm to choose the right neighborhood structure at the
right time.
For future work, the AGVNS will be hybridized with Honey Bee Mating Optimization
(HBMO) to investigate the effectiveness of the AGVNS in HBMO.
ACKNOWLEDGMENT
The authors wish to thank the Ministry of Higher Education for supporting this
work under the FRGS Research Grant Scheme (FRGS/1/2012/SG05/UKM/02/11).