INTRODUCTION
In the 1980s, enterprises were organized into very distinct activity centers, each carrying out one of the phases of the product design cycle, as shown in Fig. 1.
From the specifications defined by the Methods office or the marketing department, the engineering and design office produces a model of the product. This model leads to the construction of a prototype, which is tested in order to evaluate the feasibility, the reliability and the approximate cost of the product. When the prototype proves satisfactory, a detailed study is launched, leading to the production of the execution (manufacturing) drawings of the product.
Coordination between the various disciplines is ensured by a project leader, who communicates with each discipline, providing it with specifications and collecting a study report (calculation notes and execution drawings).

Fig. 1: 
Sequential organization of the design cycle of a product 
Currently, the project leader does not exploit the knowledge transmitted to him by the different activity centers of the enterprise, for lack of tools specialized in this function; he merely archives it, draws up the contracts and launches the calls for tenders.

Fig. 2: 
Organization of the concurrent engineering type 
It is also important to note that, in the current design cycle, there is no communication link between the different specialties involved in the design of the product.
Ultimately, each department works on the basis of the information provided by the upstream department, and this process is repeated until the end of the design cycle.
Although the sequential organization allows a clearly defined and easily controllable flow of information, it presents many disadvantages that can even threaten the survival of the enterprise.
One of the principal defects of this organization is the bottleneck effect generated whenever one of the departments is blocked. Indeed, if one department is late in completing its task, the whole product development cycle is slowed down.
The second failing of this sequential organization is that each department is dependent on the department located upstream. This situation greatly reduces the departments' margin of creativity and can lead to a dead end, especially for the departments at the end of the cycle, which must simultaneously satisfy the objectives of all the other departments and take into account the discipline rules of their own environment.
Facing increasingly strong competition, companies have reorganized. They currently tend to parallelize their activities in order to improve cost, quality and time. It is in this context that the concurrent engineering concept was born and the sequential organization was called into question (Proulx, 1992).
According to Dean and Unal (1992), concurrent engineering consists in making the right people cooperate at the right time in order to identify and solve design problems. This approach is based on two fundamental ideas:
The parallelization of the departments, whose objective is to eliminate the time wasted when a department waits for the completion of a file before starting its own work.
This new organization thus makes it possible for all the actors to begin their work simultaneously.
The installation of multidisciplinary teams to avoid the compartmentalization of the departments. Regrouping specialists from the various departments of the enterprise clearly decreases the risk of errors in the new-product development cycle, since the teams enjoy a global vision of the constraints and rules to be respected.
Nevertheless, the practical application of such an organization is unrealizable for most enterprises, since this solution leads to excessive operating costs, particularly because of the need to employ highly qualified and skilled staff.
Faced with this reality, several authors have proposed more pragmatic visions of concurrent engineering. This was the case of Jagou (1993), whose approach is based on the diagram of Fig. 2, where the various departments are organized in cascades rather than in a direct parallelization.
However, this new organization is not without risk. Control of the information circulating between the various activity centers is of primary importance in concurrent engineering. The enterprise must therefore improve its information system so that each member of a project can access, at the appropriate time, all the reliable information he needs to progress in his study.
The management and control of the information (knowledge) circulating between the various activity centers is currently one of the major preoccupations of the enterprise, in contrast to the linear organization, which only requires control of its transfer.
The use of so-called integrated solutions has brought only a partial answer to the information-sharing problem (Schmolze and Lipkis, 1983; Eisenstadt, 1991; Reinders et al., 1991). Indeed, regrouping the tool functionalities of each department of the enterprise in the same software (KBS: Knowledge-Based System) is an interesting approach from the viewpoint of data compatibility, but in general these tools do not take into account the totality of the information to be processed and are confined to a particular domain of the product development cycle. Moreover, these tools offer no functionality for knowing, at any moment, who holds the information and how far the study has advanced.
To provide these last functions, new tools then appeared on the market: Technical Data Management Systems (TDMS) (Haton et al., 1991; Aussenac et al., 1992; Gaines and Shaw, 1993). These systems make information transfer safe and facilitate the updating of shared data, but they do not allow checking the coherence of the solutions.
In spite of this considerable advantage, these tools bring only a partial solution to reducing the risks related to the use of the concurrent engineering concept. In fact, they offer no function guaranteeing the feasibility of a product throughout its development (i.e., design) cycle.
Only a few years ago, a new data-processing product made its appearance on the software market: the Technical Knowledge Management Systems (Duizabo and Guillaume, 1996; Harani, 1997). Several knowledge representation techniques, together with their associated reasoning mechanisms, have been developed in many research projects (semantic networks, conceptual graphs, frame-based representation and object-oriented models). Each of these techniques has brought interesting contributions to the field of knowledge representation while presenting some insufficiencies, particularly related to the coupling between the power of knowledge representation and the implementation requirements (modeling/programming). We note finally that no methodology, whether from the field of knowledge engineering (cognitive engineering) or from software engineering, can satisfy our needs. This led us to develop a new modeling formalism better adapted to our problems.
Our contribution is to place at the disposal of the project leader, representing the Methods office, and even of the other specialties if they work on a network, an information processing system intended to improve the quality of products by the simultaneous consideration of the data and constraints of all the operators involved in the design process.
Proposal for a resolution methodology: However, this new conception of the product design process cannot succeed, in our opinion, without the joint application of the three techniques quoted previously, namely concurrent engineering, knowledge engineering and software engineering.
We insist on the complementarity between them in the resolution of complex problems. We then show the appropriateness of implementing a multidisciplinary model resulting from their combination, in order to ensure the evolution of the solutions, their coherence and their feasibility. Finally, we apply our information processing system to the optimal design of steel constructions. This study was carried out at the production engineering laboratory of Tarbes (France) during the years 2004 to 2005.
Consequently, the so-called multidisciplinary platform, which combines the qualities of knowledge-based systems with those of object-oriented systems, not only solves the problems involved in the persistence of knowledge-based models but also makes those models evolutionary at every moment. Ultimately, given the multidisciplinary character of the design process, it appears difficult to adopt a single formalism for the algorithms of the various activity centers involved in the development of the same project. However, the original methodology that we propose allows the real multidisciplinary design cycle to be represented accurately, by means of a special treatment of the knowledge of each class resulting from the conceptual modeling. Through this powerful formalism, the operational model merges with the conceptual model. Indeed, an interactive process of extraction and reintroduction of knowledge, starting from an initial model of the system, makes it possible to regenerate new knowledge inside the classes. This approach makes the system evolutionary at every moment. Concepts of the test or viewpoint type allow the automatically started iteration mechanism to be stopped at the appropriate time. Then, suitable bridges interactively ensure the transfer of the obtained knowledge between the various classes of the model, leading to a new model that this time integrates the multidisciplinary character, since it takes into account the knowledge of all the classes of the system. Through this technique, the coherence of the model is fully assured at every moment. Lastly, a filtering system makes it possible to control, at each stage, the feasibility of the obtained models and to select the one that best answers all the viewpoints in the sense of certain criteria. Moreover, a judicious choice of the kinds of classes (specialty concept), the specificity of their attributes and the adapted treatment procedures enabled us to circumvent the coupling problems (knowledge representation power vs. implementation requirements) encountered in the majority of the knowledge-based systems in the literature.
The multidisciplinary project: The multidisciplinary platform is a generic framework for developing product optimization applications, starting from parameters belonging to the different disciplines involved in the design of the product.

Fig. 3: 
The multidisciplinary modeling 
This software platform not only solves the problem of processing and transferring knowledge, but also allows the tools specific to each discipline to evolve autonomously, in particular the loops and the optimization algorithms, by regarding the multidiscipline as a particular discipline.
The formalism of this methodology is represented in Fig. 3.
Development steps of an application
Choice of the context: By context we mean the whole set of choices made prior to the optimization, gathering the parameters that will not vary. For example, the topology of the buildings (product) is fixed, the materials are known, etc.
Construction of the objective functions and the optimization constraints: Objective functions may be defined by specialty, each leading to its own optimization loop. For the multidisciplinary objective functions, the only obligation is that they must call upon the same parameters as the objective functions of the specialties concerned. The constraints are stated by specialty, resulting in the definition of the solution space to be rejected by the algorithms. These will most often be inequality constraints of the threshold type, physical coherence being ensured by the use of a validated model.
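As a minimal sketch of this organization (the parameter names, objective expressions and thresholds are purely illustrative assumptions, not part of the platform), the per-specialty objectives, the multidisciplinary objective built on the same parameters and the threshold constraints defining the rejected solution space could look as follows:

```python
# Hypothetical sketch: per-specialty objectives and threshold constraints.
# Parameter vector: x = (bar_diameter, slab_thickness) -- illustrative only.

def steel_objective(x):
    bar_diameter, _ = x
    return bar_diameter ** 2          # e.g., steel weight grows with section area

def concrete_objective(x):
    _, slab_thickness = x
    return 3.0 * slab_thickness       # e.g., concrete volume cost

def multidisciplinary_objective(x):
    # Must call upon the same parameters as the specialty objectives.
    return steel_objective(x) + concrete_objective(x)

# Threshold-type inequality constraints, stated by specialty.
# A candidate violating any of them falls in the rejected solution space.
constraints = [
    lambda x: x[0] >= 0.02,   # steel: minimum bar diameter (m)
    lambda x: x[1] <= 0.80,   # concrete: maximum slab thickness (m)
]

def is_feasible(x):
    return all(c(x) for c in constraints)
```

Stating the constraints as simple predicates keeps them usable both for rejection filtering and for later feasibility checks on candidate solutions.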
Identification of the usable algorithms: The optimization algorithms depend on the chosen objective function as well as on the design constraints.

Fig. 4: 
The OptiMétal platform 
In all cases, it is a question of minimization under constraints (regulations and standards in force). Currently, two algorithms are implemented:
• A pure Monte Carlo algorithm based on random draws of a parameter set, followed by a repositioning at the point corresponding to the minimal objective function, then a new draw, and so on. If the new minimum is greater than the minimum of the preceding iteration, the draw is repeated. The constraints are satisfied by simple filtering of the obtained solutions. The advantage is the great robustness of this algorithm, which accepts any form of objective function and easily takes the constraints into account; the drawback is the high number of calculations and sometimes the impossibility of reaching the absolute minimum (it can only be approached).
• A Levenberg-Marquardt algorithm that allows convergence towards the absolute minimum, but requires a least-squares type objective function and for which taking the constraints into account by penalization is more delicate.
These algorithms can handle continuous parameters, such as the diameter of a circular-section bar, or discrete parameters, such as the height of a standard metal section.
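The Monte Carlo variant described above (random draws within bounds, constraint filtering, repositioning at the best feasible point) can be sketched as follows; the objective, bounds and constraints in the example are illustrative assumptions, not the platform's actual code:

```python
import random

def monte_carlo_minimize(objective, bounds, constraints,
                         draws_per_iter=81, max_iters=30, seed=0):
    """Pure Monte Carlo minimization under constraints (sketch).

    Each iteration draws a set of parameter vectors within the bounds,
    filters out those violating the constraints, and repositions at the
    feasible point with the minimal objective. A draw that yields no
    improvement is simply discarded and drawing resumes.
    """
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(max_iters):
        candidates = [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                      for _ in range(draws_per_iter)]
        # Constraints are satisfied by simple filtering of the draws.
        feasible = [x for x in candidates if all(c(x) for c in constraints)]
        if not feasible:
            continue  # resume drawing
        x = min(feasible, key=objective)
        if objective(x) < best_f:
            best_x, best_f = x, objective(x)  # reposition at the new minimum
    return best_x, best_f

# Example: minimize a quadratic over [0, 1]^2 with a threshold constraint.
obj = lambda x: (x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2
best, value = monte_carlo_minimize(obj, bounds=[(0.0, 1.0), (0.0, 1.0)],
                                   constraints=[lambda x: x[0] >= 0.1])
```

The robustness noted above is visible here: nothing is assumed about the objective beyond the ability to evaluate it, which is why the method tolerates any functional form, at the price of many evaluations.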
Numerical implementation: The presented application concerns metallic construction, for which we seek an optimization gathering the viewpoints of three disciplines: the Steel Structures engineering and design department, the Geotechnics engineering and design department and the Reinforced Concrete engineering and design department.
We present the choices retained for the current version of the OptiMétal platform, whose architecture is shown in Fig. 4.
The dotted lines show the relations between the knowledge files produced by the system and the software corresponding to each discipline.
From the known characteristics of the bar sections and the foundation slabs resulting from the design parameter vector (Monte Carlo draw), the structural software evaluates the contact actions and the maximum stresses in the structure. The contact actions are then used by the settlement computation software. The interaction loop continues until the stop test is met; the reinforced concrete computation software then evaluates the weights of the reinforcement steel and of the foundation concrete, starting from soil_horizons.txt, foundation_slabs.txt and reactions.txt (last value of the interaction loop). Then, the multidisciplinary software evaluates the construction cost for the considered set of design parameters. Lastly, the optimization loop continues until the evaluation criterion is satisfied (Monte Carlo satisfaction criterion).
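The chain just described can be sketched as follows; the function names, the toy physics inside them, the convergence test and the cost model are hypothetical stand-ins for the discipline software packages, not the actual OptiMétal code:

```python
# Stand-in discipline models (illustrative physics, not the OptiMétal code).
def structural_analysis(params, settlements):
    # Contact actions soften slightly as the supports settle.
    reactions = [params["load"] / (1.0 + s) for s in settlements]
    stresses = [r / params["section_area"] for r in reactions]
    return reactions, stresses

def settlement_analysis(reactions):
    # Linear soil model: settlement proportional to the contact action.
    return [1e-4 * r for r in reactions]

def reinforced_concrete(reactions):
    # Reinforcement steel and foundation concrete weights.
    return 0.02 * sum(reactions), 0.5 * sum(reactions)

def cost(params, steel_weight, concrete_weight):
    return (params["steel_price"] * steel_weight
            + params["concrete_price"] * concrete_weight)

def evaluate_construction(params, max_cycles=20, tol=1e-3):
    """Structure/geotechnics interaction loop, then concrete and cost."""
    settlements = [0.0]   # soil settlements assumed null at start
    reactions = []
    for _ in range(max_cycles):
        reactions, stresses = structural_analysis(params, settlements)
        new_settlements = settlement_analysis(reactions)
        # Stop test of the interaction loop: settlements stabilized.
        converged = max(abs(a - b) for a, b
                        in zip(new_settlements, settlements)) < tol
        settlements = new_settlements
        if converged:
            break
    steel_w, concrete_w = reinforced_concrete(reactions)
    return cost(params, steel_w, concrete_w)
```

In the platform itself, the exchanges between the blocks pass through the knowledge files (soil_horizons.txt, foundation_slabs.txt, reactions.txt) rather than in-memory values; the sketch only reproduces the loop structure.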
Contribution of the methodology: To show the interest of the suggested methodology, we adopted the approach of evaluating the same structure by two different methods:
• The first follows the traditional approach, i.e., the one corresponding to a sequential organization of the design cycle (classical, or monodisciplinary, optimization).
• The second uses the tool that we implemented, corresponding to a new concurrent-engineering type organization (multidisciplinary optimization).
In the first approach, we first optimize each discipline separately and then evaluate the construction as a whole.
Table 1: 
Comparative study of the two methodologies 


Fig. 5: 
3D structure with circular sections 
For a simple three-dimensional structure using continuous design parameters (Fig. 5), we stopped the optimization process after 7 calculations of 2 iterations each, because we estimated the convergence to be sufficient, the Von Mises stresses and the minimum variation having stabilized at values considered acceptable (in all, there were 126 Monte Carlo draws).
In the second approach, on the other hand, the suggested information processing system evaluates the construction directly and globally. It should be noted that each Monte Carlo iteration required 3^{4} = 81 draws (i.e., 243 calculations in all for this structure).
The analysis of the two methods gives the synthesis in Table 1.
The contribution of this methodology can be represented by the ratio of the costs generated by each of the two approaches:
that is to say, roughly, a saving of 57%.
CONCLUSION
It thus appears that this approach is very promising, but it must be refined by a number of measurements before the obtained results can be validated. Indeed, for the sake of objectivity, the order of magnitude of the cost saving noted on this sample cannot be generalized, for the simple reason that our tool is in its first version and thus does not integrate all the optimization constraints related to the Eurocodes regarding the calculation of the resistance, the stability and the dimensioning of constructions.
In the final version, a vast program of numerical simulations on various typologies will make it possible to elaborate specialty rules that will help designers, at an early phase, towards globally optimized solutions.
This study opens broader prospects, since it lies at the interface of four types of scientific concerns, namely: the design of computerized decision-making systems (expert systems), the modeling and management of complex systems (knowledge engineering), multi-agent or multidisciplinary systems (concurrent engineering) and, finally, multicriterion optimization. It thus opens an interesting prospect in all fields related to multidisciplinarity, such as aeronautics, naval construction, automobile manufacture, etc.