Quarterly publication of the Faculty of Economics and Applied Economics of the Katholieke Universiteit Leuven, published in cooperation with Ekonomika, the association of graduates of the faculty, and with the support of the Ministry of the Flemish Community, Department of Education.
EDITORIAL BOARD:
Editor-in-chief: Prof. Dr. P. Van Cayseele
Core editors: Professors F. Abraham, G. De Bruyne, Z. Degraeve, P. De Grauwe, M. Dekimpe, D. Heremans, C. Lefebvre, P. Sercu, F. Spinnewyn, C. Van Hulle, R. Veugelers, Mrs. A. Gaeremynck and Mr. H. Dewachter.
Editorial council: in addition to the editor-in-chief and the core editors, this council also comprises the professors P. Beghin (R.U.Gent), H. Daems (K.U.Leuven), R. De Bondt (K.U.Leuven), M. Dombrecht (Nat. Bank, Brussel), S. Proost (K.U.Leuven), E. Schokkaert (K.U.Leuven), W. Vanthielen (E.H. Limburg) and J. Vuchelen (V.U. Brussel).
Editorial secretariat: A. Ronsmans, Tijdschrift voor Economie en Management, Naamsestraat 69, 3000 Leuven, Tel. 016/32.66.88 - Fax 016/32.66.10.
Responsible publisher: P. Van Cayseele, Zemstbaan 180, 2800 Mechelen.
The Editorial Board judges the quality of the contributions published but cannot be held responsible for their content.
SUBSCRIPTION TERMS: full volume: 4 issues (ca. 500 pp.)
2,000 BF supporting subscription*
1,200 BF institutional subscription (libraries, institutions, companies)
800 BF personal subscription (to be prepaid via personal account)
500 BF student subscription
1,500 BF or $40 foreign subscription
350 BF single issue
* The list of supporting subscriptions is published once a year in the journal. Subscriptions and orders are settled exclusively via postal account 000-0112553-33, Tijdschrift voor Economie en Management, Leuven.
KATHOLIEKE UNIVERSITEIT TE LEUVEN - FACULTEIT DER ECONOMISCHE EN TOEGEPASTE ECONOMISCHE WETENSCHAPPEN
QUANTITATIVE METHODS: NEW METHODS AND APPLICATIONS

Advertisements
Introduction - Y. Dirickx
Curriculum Vitae Prof. Dr. Franz Van Winckel
The deterministic assortment problem - W. Gochet and M. Vandebroek
Recent Developments in Integer Linear Programming Solve Managerial Decision Problems - Z. Degraeve
The Traveling Salesman problem applied to Order Picking - L. Gelders and D. Heeremans
Resource-Constrained Project Scheduling: a View on Recent Developments - W.S. Herroelen and E.L. Demeulemeester
Queueing Theory and Operations Management - M. Lambrecht and N. Vandaele
Statistical Failure Prevision Problems - Y. Dirickx and G. Van Landeghem
Advertisements
Book reviews
Theses
Publications F.E.T.E.W.
DECEMBER 1994 - QUARTERLY - NUMBER 4 - VOLUME XXXIX
Robert Vertonghen and Chris Lefebvre
A handbook for teaching and practice. This handbook on company accounting first gives an overview of the Belgian legal requirements concerning the bookkeeping and annual accounts of companies. Attention then turns to the general chart of accounts and to the annual accounts of companies. After this general overview follows a thorough study of the commercial company forms: for each company form, the bookkeeping required for its incorporation, operation and liquidation is given. A separate new chapter deals with the legal and accounting aspects of mergers and demergers of companies. The book ends with a chapter on consolidated annual accounts. This fourth edition takes into account, among other things, the various changes resulting from the law of 19 June 1993 amending, as regards mergers and demergers, the company laws coordinated on 30 November 1935, and the royal decree of 3 December 1993 amending the royal decrees of 8 October 1976 on the annual accounts of companies, of 12 September 1983 determining the minimum layout of a general chart of accounts, and of 6 March 1990 on the consolidated annual accounts of companies. ISBN 90-334-3119-X
17.5 x 25 cm, 495 pp., 1,480 BF
Information and distribution:
Uitgeverij Acco, Tiensestraat 134, 3000 Leuven, Tel. 016/29.11.00 - Fax 016/20.73.89. The book is also available through bookshops.
- Equity and quasi-equity financing in the form of temporary participations with repurchase option, private bond loans or fixed-term loans
- Financial restructurings
- Changes of control: family succession problems, spin-offs via management buy-out and buy-in, mergers and acquisitions
- Protection of minority interests
INVESTCO, Regentlaan 54 Box 2, B-1000 Brussels - Tel. 02/513 45 20 - Fax 02/513 97 41
It is beyond doubt that with the appointment of Franz Van Winckel to the Faculty of Economics and Applied Economics in 1958, the teaching of Operations Research/Management Science (OR) and of quantitative methods in general immediately took off. Thousands of students have since immersed themselves in simplex tableaus, recursive equations, the phenomena of waiting times, and so on. At the time, OR was regarded as a futuristic science that would reduce business management to model-based exercises. This did not happen. First, because the methods of OR could not be implemented given the constraints imposed by the computing equipment then available. Second, and more importantly, because it gradually became clear that management is more than developing procedures based on quantitative models. Today OR is regarded as a "toolkit" of techniques that can support decision making, drawing on advanced software and complex databases. This special issue attempts to give the reader an impression of recent developments in the use of mathematical and statistical techniques. Leafing through the articles, the diversity of the application areas is immediately striking. Gochet and Vandebroek (The deterministic assortment problem) report how, roughly speaking, mathematical programming can contribute to controlling material loss by optimally determining the "standard sizes" of a product assortment. The article also contains numerical results. Gochet and Vandebroek already note that solving integer linear programming problems is no sinecure. Degraeve (Recent Developments in Integer Linear Programming Solve Managerial Decision Problems) picks up this thread and shows how the development of structured modeling languages offers a way out for certain classes of problems. This makes clear that modern OR also rests on developments in computer science. While Degraeve treats the methods of integer programming in general, Gelders and Heeremans (The Traveling Salesman problem applied to Order Picking) offer the reader an original application of the famous traveling salesman problem to a logistics problem, namely order picking. The order picking problem can "almost" be described as a traveling salesman problem; with an additional heuristic, Gelders and Heeremans manage to solve this practical problem. The contribution of Herroelen and Demeulemeester (Resource-Constrained Project Scheduling) likewise deals with an application area in production: resource-constrained project scheduling. The well-known network planning techniques CPM/PERT are drastically generalized here, and the current state of the art is thoroughly reviewed. Queueing theory is a discipline in which applied (and even less applied) mathematicians can indulge themselves; the literature - this is an understatement - is vast. Waiting is an annoying aspect of life in general, but in a production environment it simply means throwing money away. Lambrecht and Vandaele (Queueing Theory and Operations Management) approach three aspects of capacity management from the perspective of queueing theory (fortunately for the reader, in a non-mathematical way). The article opens broad perspectives for a better understanding of the implications of the JIT revolution. Queueing is stochastics, and stochastics and statistics are closely related disciplines; statistics likewise has no reputation for simplicity and accessibility. Dirickx and Van Landeghem (Statistical Failure Prevision Problems) place the problem of predicting business failures in a general framework and show how statistical methods (classification methods) can shed light experimentally where no conclusive theoretical foundation exists. Failure prevision is an important problem in the finance literature, and statistics turns out to be a handy analytical instrument. The toolkit of the OR researcher is broad, and the possible applications are legion. At the start of his career, Franz Van Winckel (like everyone else) could hardly have foreseen the flight his science would take. The realization that the authors of these articles can be counted among his students will no doubt increase his reading pleasure and convince him that it was not in vain. I wish the reader much reading pleasure.
Yvo M.I. Dirickx
CURRICULUM VITAE
1. PERSONAL
Van Winckel, Franz
Predikherenberg 34, 3020 Herent
Age: 65 years

2. EDUCATION
- Mining Engineer, 1953
- Chemical Engineer, 1954
- Applied Economics, 1955 (all K.U.Leuven)
- laureate prize KVIV
- laureate prize Union Minière

3. EXPERIENCE
- Professor at K.U.Leuven (Mathematics, Operations Research and Production Management); courses at the Faculty of Economics and Applied Economics and the Faculty of Applied Sciences
- visiting professor, University of Chicago: full academic year 1967-1968; periods of 6 weeks in 1975 and 1986
- visiting professor, University of Stellenbosch: 1978, 1981, 1983 and 1987
- visiting professor, Saint Louis University - Baguio City: 1990, 1992, 1993
- chairman of the Department of Applied Economics, K.U.Leuven: 1968-1972
- dean of the Faculty of Economics and Applied Economics, K.U.Leuven: 1974-1978
- president of VVE: 1981-1982
- member and executive director of CORE for K.U.Leuven
- scientific director at SEMA and Van de Bunt - Nederland
- chairman and/or lecturer at various scientific congresses and universities: Cornell University, University of Chicago, MIT, Stanford, the Universities of Capetown and Stellenbosch, the Universities of Berlin, Eindhoven, Gent, KUBrussel, VUBrussel, Rotterdam, Lille, UFSIA etc., and scientific congresses of VEV and KVIV
- author of "Taktiek en strategie van het voorraadbeleid"; "Lineaire programmatie en aanverwante methoden" and the software "LINSE"; "Wiskunde voor economisten"; "Wiskunde voor toekomstige economisten"; "Queueing theory and applications" and the software "Wline"; "Wachtlijnen en simulatie" and the software "WACHT".
THE DETERMINISTIC ASSORTMENT PROBLEM

by W. GOCHET* and M. VANDEBROEK*

* Department of Applied Economics, K.U.Leuven
I. INTRODUCTION

The usefulness of a product often depends on a limited number of critical attributes. Ideal values for these attributes usually do not exist, because they vary from user to user. When choosing a car, for example, customers have different preferences regarding the attributes engine power, trunk capacity and braking system. Buyers of refrigerators differ with respect to the attribute capacity, buyers of screws with respect to the number of screws per package, and buyers of clothing look for the right size. The combination of values that the various attributes of a product take will be referred to below as the size of the product. If the product is characterized by one attribute, the size is the value of this attribute and the size is one-dimensional. If the size is a combination of values of several attributes, the size is multidimensional. Depending on the circumstances and the moment of acquisition, the same user may need different sizes of a product. If a company uses boxes as packaging material, for example, it may need boxes with capacity x_1, or more specifically boxes with dimensions l_1 x b_1 x h_1, on day 1, while on day 2 it needs boxes with capacity x_2, or dimensions l_2 x b_2 x h_2. For the user of the boxes the critical attribute is their capacity, and this desired capacity can vary. The notions user and attribute must be interpreted in the broadest sense. The user can be an ordinary consumer,
but can also be the company that, for example, produces metal plates of different length, width and thickness (= attributes) and processes these plates further in a production process. In the latter case the same company is both producer and user. Confronted with demand for a large number of different sizes of a product, a producer will not necessarily offer each of these sizes but will restrict itself to a number of standard sizes. The reasons are obvious: avoiding excessive production costs, setup times, design costs, inventory costs, an unwieldy inventory administration, and so on. A consequence is that the user is restricted in his choice and may well not find his ideal size in the offering. He can react by rejecting the product, or by adapting and making his choice from the standard sizes offered. In the latter case, the models described below will account for a substitution cost, which applies to both the producer and the user but can differ between the two. For the producer the substitution cost will often consist of a loss of goodwill; for the user this cost takes various forms: material loss because a too-large standard size must be cut down to the desired dimension(s), discomfort when a garment does not fit perfectly, and so on. Usually the producer decides which standard sizes are offered; sometimes, however, the user chooses the standard sizes. Suppose, for example, that in the packaging-box example above the user needs tens or hundreds of different box dimensions. Instead of purchasing this large number of different dimensions, a limited number of standard sizes is ordered in large quantities, and a somewhat larger box is used when the exact size is not available. This entails extra costs (superfluous cardboard, oversized boxes), but these extra costs are probably outweighed by the advantages of lower inventory costs, simplified inventory handling and administration, and possibly discounts when buying large quantities of the same dimensions. The assortment problem concerns the choice of the standard sizes by the producer or the user such that (expected) costs are minimized or (expected) revenues are maximized. This choice obviously also implies determining the number of standard sizes to be offered or used.
The problem can be posed deterministically or stochastically: in the first case the demand for the various (ideal) sizes is assumed known, while in the stochastic case only a probability distribution for it is available. In both cases the ideal sizes may be spread continuously or discretely over an interval. Moreover, the problem can be posed as a one-period or a multi-period problem. In the one-period problem an assortment of standard sizes is chosen in the light of a one-time known or stochastic demand. With stochastic demand there is the additional problem of possibly unused products that entail extra costs. In the multi-period problem this can be accommodated by explicitly including inventory formation in the model. From the above considerations and the examples cited earlier, it will be clear that no single all-encompassing model for the assortment problem exists. In each concrete situation it must be examined which model formulation is most suitable. Below, mainly the deterministic one- and two-dimensional problem over one period is treated. This covers concrete problems in which, for example, a producer knows the orders via an order book, or a user can estimate the sizes he requires fairly accurately. Finally, note that in a number of practical situations the assortment problem cannot be treated separately from the better-known cutting stock problem. When choosing standard plates of glass, metal or cardboard, it is often possible not only to use a standard plate for one (smaller) demanded dimension, but also to cut a standard plate into several smaller demanded dimensions. Clearly, in such cases the choice of the assortment of standard plates depends on the possible cutting patterns. The already complex problem of choosing the standard sizes thereby becomes considerably more complicated, and no further attention is paid to it in this contribution.
II. THE DETERMINISTIC ONE-DIMENSIONAL PROBLEM

A. Discrete demand
The simplest form of the assortment problem is obtained when only one attribute of the product matters. Suppose the sizes x_1, x_2, ..., x_N are required with demands g_1, g_2, ..., g_N. The attribute is assumed to be at least ordinally measurable, so that

x_1 \le x_2 \le \dots \le x_N

where \le denotes the ordinal relation between the sizes. The relation x_{i-1} \le x_i can mean that
- x_i has a greater length than x_{i-1},
- x_i contains more units than x_{i-1},
- x_i is of better quality than x_{i-1}, ...
For the time being it is assumed that the number n of standard sizes to be selected is known and that the values of the standard sizes s_1, s_2, ..., s_n are unknown. If, for example, only substitution costs are taken into account in determining the s_i, i = 1...n, a first model formulation is possible via integer linear programming. The following notation is used:

\varphi(x_i, x_j) = the substitution cost if a demanded unit of size x_i is satisfied by size x_j. This cost can be set very large if size x_j cannot be used for the demand for size x_i (i = 1...N, j = 1...N).

If it is assumed that the choice of the n standard sizes is restricted to a choice among the demanded sizes, the following variables suffice:

\beta_j: binary variable, = 1 if x_j is included as a standard size, = 0 if not;
a_{ij}: binary variable, = 1 if the demand g_i is satisfied via size x_j, = 0 if not.
The integer linear programming problem then reads:

\min \sum_{i=1}^{N} \sum_{j=1}^{N} \varphi(x_i, x_j)\, g_i\, a_{ij} \qquad (1)

subject to

\sum_{j=1}^{N} a_{ij} = 1, \quad i = 1, \dots, N \qquad (2)

a_{ij} \le \beta_j, \quad i, j = 1, \dots, N \qquad (3)

\sum_{j=1}^{N} \beta_j \le n \qquad (4)

a_{ij}, \beta_j \in \{0, 1\}, \quad i, j = 1, \dots, N \qquad (5)
The objective function (1) minimizes the total substitution cost, where \varphi(x_i, x_j) g_i is the cost of satisfying the demand for size x_i via size x_j. The equalities in (2) express that every demand must be satisfied, and the inequalities in (3) state that the demand for size x_i can only be satisfied via size x_j if x_j is chosen as a standard size. Finally, inequality (4) expresses that at most n different standard sizes can be selected. Note that this formulation does not require the ordinal relation between the sizes; later formulations, however, will. The model above is the p-median problem, which appears in the literature in a whole range of applications. This formulation is the so-called strong form of the p-median problem. The constraints (3) can be replaced by the equivalent constraints

\sum_{i=1}^{N} a_{ij} \le M \beta_j, \quad j = 1, \dots, N \qquad (6)

where M is a large number. The formulation in which the N^2 inequalities of (3) are replaced by the N inequalities of (6) is called the weak form of the p-median problem.
Whichever form is used, solving integer linear optimization problems is time-consuming and becomes impossible when the number of binary variables is too large. By replacing the binary constraints (5) by 0 \le a_{ij} \le 1 and 0 \le \beta_j \le 1, a pure linear programming problem is obtained, called the LP relaxation. Experiments have shown that the solution of the LP relaxation of the strong form of the p-median problem is often automatically binary, especially in applications of the assortment problem. The LP relaxation of the weak form will often yield fractional values for the variables and is therefore less suitable. Whichever form is used, the models above have the drawback that the number of binary variables is quadratic in N, namely N^2 + N, and thus grows quickly with the value of N.
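To make the formulation concrete before the numerical example below, here is a minimal sketch of the strong form (1)-(5) in Python using the open-source PuLP modeller; the instance data, the linear cost function and the choice of PuLP (rather than the LINDO package used later in the text) are illustrative assumptions, not taken from the article.

```python
# Strong-form p-median model for a toy assortment instance.
import pulp

x = [10, 20, 30, 40, 50]   # demanded sizes x_1..x_N
g = [4, 6, 3, 5, 2]        # demands g_1..g_N
N = len(x)
n = 2                      # number of standard sizes allowed

def phi(i, j):
    # linear substitution cost: a smaller size can only be cut
    # from a larger one; infinite costs are handled by simply
    # omitting the corresponding variables, as the text suggests
    return x[j] - x[i] if j >= i else None

prob = pulp.LpProblem("p_median_assortment", pulp.LpMinimize)
a = {(i, j): pulp.LpVariable(f"a_{i}_{j}", cat="Binary")
     for i in range(N) for j in range(N) if phi(i, j) is not None}
b = [pulp.LpVariable(f"beta_{j}", cat="Binary") for j in range(N)]

prob += pulp.lpSum(phi(i, j) * g[i] * a[i, j] for (i, j) in a)        # (1)
for i in range(N):                                                    # (2)
    prob += pulp.lpSum(a[i, j] for j in range(N) if (i, j) in a) == 1
for (i, j) in a:                                                      # (3)
    prob += a[i, j] <= b[j]
prob += pulp.lpSum(b) <= n                                            # (4)

prob.solve()
print("standard sizes:", [x[j] for j in range(N) if b[j].value() == 1])
print("total substitution cost:", pulp.value(prob.objective))
```

Dropping `cat="Binary"` in favour of continuous bounds [0, 1] gives the LP relaxation discussed above.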
Example: suppose demand is given for ten sizes x_1, ..., x_10, where the attribute represents the length of the bars. The substitution cost has the form

\varphi(x_i, x_j) = x_j - x_i \quad \text{for } j \ge i, \qquad = +\infty \quad \text{for } j < i,

i.e. a smaller length can only be obtained from a larger length, and the substitution cost is the difference in length. The infinitely large substitution costs are best incorporated in the model by omitting the corresponding variables a_{ij} from the model. The LP relaxation of the strong form yields a solution that is binary for every n from 1 to 10. The results are shown in Table 1.
TABLE 1: LP relaxation of the strong form
If the LP relaxation of the weak form is solved with M = 10 and, for example, n = 3, the solution obtained is: all a_{ij} = 0 except a_{ii} = 1, i = 1...10, and \beta_j = 0.1, j = 1...10.
Solving the weak formulation with a software package for integer linear programming, LINDO in this case, already requires 343 branchings in the branch-and-bound method. The weak form is therefore totally unusable for even slightly larger problems. If the substitution cost \varphi(x_i, x_j) is specified further, more efficient solution procedures may come into consideration. A frequently occurring case is the one where, if s_{j-1} < x_i \le s_j, the demand x_i is satisfied via s_j (the immediately larger size, better quality, etc.) with substitution cost \varphi(x_i, s_j). In this case a formulation via dynamic programming is very efficient.
Let

k(x_i, x_j) = \sum_{l=i}^{j} \varphi(x_l, x_j)\, g_l \quad \text{for } i \le j,

i.e. k(x_i, x_j) is the substitution cost of satisfying the sizes x_i, x_{i+1}, ..., x_j via standard size x_j. Denote by \Phi_z(x_i) the minimal substitution cost of satisfying the demand for the sizes x_i, x_{i+1}, ..., x_N when z standard sizes may be used for this. For every i = 1...N and z = 2, 3, ..., n:

\Phi_z(x_i) = \min_{x_j \in R(x_i)} \left[ k(x_i, x_j) + \Phi_{z-1}(x_{j+1}) \right] \qquad (7)

where R(x_i) = \{x_i, x_{i+1}, \dots, x_N\} and \Phi_z(x_{N+1}) = 0. If only one standard size may be chosen, it must necessarily be x_N, so that for z = 1 simply

\Phi_1(x_i) = k(x_i, x_N) \quad \text{for every } x_i. \qquad (8)
Next, (7) can be used to compute \Phi_z(x_i) for z = 2, 3, ..., n. The minimal cost of satisfying the entire demand with at most n standard sizes is then given by \Phi_n(x_1). Note that in this way the minimal substitution cost can be computed very quickly for different values of n. It is then straightforward to add other costs, which usually depend on n, to \Phi_n(x_1) in order to determine the optimal value of n. If n is not fixed a priori and every additional standard size entails a fixed cost c, a more efficient formulation is possible. Let \Phi(x_i) be the minimal substitution plus fixed cost of satisfying the demand for the sizes x_i, x_{i+1}, ..., x_N. Then for every i = 1, 2, ..., N-1:

\Phi(x_i) = \min_{x_j \in R(x_i)} \left[ c + k(x_i, x_j) + \Phi(x_{j+1}) \right] \qquad (9)

with \Phi(x_{N+1}) = 0, while for i = N:

\Phi(x_N) = c + k(x_N, x_N). \qquad (10)

All \Phi(x_i) can then be computed recursively via i = N-1, N-2, ..., 2, 1. The first applications of dynamic programming to the deterministic one-dimensional problem go far back; see for instance Sadowski (1959), Frank (1965) and Wolfson (1965).
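A compact Python sketch of the recursion (7)-(8) follows; the ten sizes and demands are made-up data, and the linear substitution cost of the earlier example is assumed.

```python
# Dynamic program (7)-(8): minimal substitution cost with n standard sizes.
from functools import lru_cache

x = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]         # demanded sizes, ordered
g = [30, 20, 40, 10, 25, 35, 15, 20, 10, 5]    # demands
N = len(x)

def k(i, j):
    # cost of serving sizes x_i..x_j from standard size x_j
    # (cut the larger length down: phi = x_j - x_l)
    return sum((x[j] - x[l]) * g[l] for l in range(i, j + 1))

@lru_cache(maxsize=None)
def Phi(z, i):
    # minimal substitution cost for sizes x_i..x_N using z standard sizes
    if i == N:
        return 0.0
    if z == 1:
        return k(i, N - 1)     # (8): the single size must be x_N
    return min(k(i, j) + Phi(z - 1, j + 1) for j in range(i, N))  # (7)

for n in range(1, 6):
    print(n, Phi(n, 0))
```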
Example: the demand for the sizes (x_1, x_2, ..., x_40) = (1, 2, ..., 40) is generated at random between 0 and 100. This demand is shown in Figure 1.

FIGURE 1: Demand for the sizes 1...40
Table 2 gives the results for several values of n and the two following substitution cost functions:

\varphi_1(x_i, x_j) = x_j - x_i \quad \text{for } j \ge i, \qquad = +\infty \quad \text{for } j < i

\varphi_2(x_i, x_j) = (x_j - x_i)^2 \quad \text{for } j \ge i, \qquad = +\infty \quad \text{for } j < i
The results were obtained with a Pascal program that computes the solution almost instantaneously. In contrast to the p-median formulation, this approach can solve large problems. As indicated above, there is, however, a restriction on the form of the substitution cost.
B. Continuous demand

The discrete demand of the previous paragraph is here replaced by a continuous demand between two bounds x_l and x_u. A function f(x) with f(x) \ge 0 and \int_{x_l}^{x_u} f(x)\,dx = 1 is assumed to be known. The proportion of the total demand lying between the sizes x_a and x_b, with x_b > x_a, is then \int_{x_a}^{x_b} f(x)\,dx. This continuous demand only makes sense if the attribute can be measured on an interval scale. Analogously to the case of discrete demand, we again look for a number of standard sizes s_1 \le s_2 \le \dots \le s_n such that costs are minimized (or revenues maximized). If the entire demand must be satisfied, s_n = x_u. This implies that a number of well-known density functions, such as the normal, gamma and lognormal densities, are not directly usable for f(x), since they have no finite upper bound. A useful distribution is, for example, the beta distribution, which can take many different shapes between arbitrary bounds x_l and x_u depending on the choice of its parameters. Although more elaborate models can be considered, only the substitution cost will be minimized here. Suppose that a demand for size x with s_{i-1} < x \le s_i is satisfied by s_i with a substitution cost of the form \alpha (s_i - x)^\delta.
The mean substitution cost per demanded unit is then

GSK = \sum_{i=1}^{n} \int_{s_{i-1}}^{s_i} \alpha (s_i - x)^\delta f(x)\, dx \qquad (11)

where s_0 = x_l and s_n = x_u. The standard sizes s_1, s_2, ..., s_n must be chosen such that GSK is minimal. Mathematically this leads to a continuous optimization problem of the form

\min_{s_1, \dots, s_n} GSK \qquad (12)

subject to

s_0 = x_l, \quad s_n = x_u \qquad (13)

s_1 \le s_2 \le \dots \le s_n. \qquad (14)
A possible approach to solving this problem is to ignore the constraints (14) and set the derivatives of (12) with respect to the variables s_1, s_2, ..., s_{n-1} equal to zero. This leads to a system of n-1 (nonlinear) equations in n-1 unknowns:

\delta \int_{s_{i-1}}^{s_i} (s_i - x)^{\delta - 1} f(x)\, dx = (s_{i+1} - s_i)^\delta f(s_i), \quad i = 1, \dots, n-1. \qquad (15)

Note that \alpha drops out of this system. If, simplifying, \delta = 1 is assumed (linear substitution cost), then (15) reduces to

F(s_i) - F(s_{i-1}) = (s_{i+1} - s_i) f(s_i), \quad i = 1, \dots, n-1, \qquad (16)

where F denotes the distribution function of f. Solving the system (15) or (16) by iterative methods can give rise to two difficulties:
- the constraints (14) are not satisfied;
- the iterative method does not converge, or converges to a local minimum or saddle point of the function (12).
Both possibilities are examined through a number of examples and reported below.
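As a preview of the first example below, the following sketch solves the system (16) numerically for a Beta(2,2) density on (0,1); the use of scipy rather than the Mathematica package mentioned in the text is our assumption. With n = 5 it should approximately reproduce the corresponding row of Table 3.

```python
# Solve the first-order conditions (16) for linear substitution cost.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import beta

a_par, b_par = 2.0, 2.0   # Beta(2,2) demand density on (0, 1)
n = 5                     # number of standard sizes; s_n = 1 is fixed

def foc(s_free):
    # s_free holds s_1 .. s_{n-1}; prepend s_0 = 0 and append s_n = 1
    s = np.concatenate(([0.0], s_free, [1.0]))
    # equation (16): F(s_i) - F(s_{i-1}) = (s_{i+1} - s_i) f(s_i)
    return [beta.cdf(s[i], a_par, b_par) - beta.cdf(s[i - 1], a_par, b_par)
            - (s[i + 1] - s[i]) * beta.pdf(s[i], a_par, b_par)
            for i in range(1, n)]

start = np.linspace(0.2, 0.8, n - 1)    # interior starting values
print(np.round(fsolve(foc, start), 3))  # approx. [0.297 0.467 0.626 0.793]
```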
Example: consider a beta distribution over the interval (0,1), characterized by two parameters a and b. For all results below, \alpha = 1 was assumed, and the software package Mathematica was used to solve the system of nonlinear equations. Nowhere did convergence problems or problems with local minima or saddle points occur.

1. Symmetric beta distribution, e.g. (a,b) = (2,2); see Figure 2 for the density. For the case n = 2 only s_1 needs to be determined, since s_2 = x_u = 1. Figure 3 shows the mean substitution cost GSK as a function of s_1 for n = 2. Although the function is not convex, no real problems are to be expected in computing the (global) minimum of GSK. Table 3 gives the results for n = 2 and n = 5.

2. Right-skewed beta distribution, e.g. (a,b) = (2,5); see Figure 4 for the density and Figure 5 for GSK as a function of s_1 for n = 2. The optimal standard sizes are again given in Table 3.

3. Extremely right-skewed beta distribution, e.g. (a,b) = (1/2,2); see Figure 6 for the density, Figure 7 for GSK as a function of s_1 for n = 2, and Table 3 for the results.

FIGURES 2-7: Beta(2,2), Beta(2,5) and Beta(1/2,2) densities and the corresponding GSK curves
TABLE 3: Beta distributions

(a,b)     n    GSK      assortment
(2,2)     2    0.240    0.578; 1
(2,2)     5    0.094    0.297; 0.467; 0.626; 0.793; 1
(2,5)     2    0.252    0.425; 1
(2,5)     5    0.082    0.182; 0.305; 0.441; 0.622; 1
(1/2,2)   2    0.282    0.283; 1
(1/2,2)   5    0.087    0.063; 0.195; 0.379; 0.626; 1
If a distribution f(x) with x_u = +\infty is nevertheless used, two approaches are possible:
- s_n is set to a reasonable finite value and the demand x > s_n is neglected; in the case of a normal distribution for x with mean \mu and standard deviation \sigma, this could be \mu + 3\sigma;
- in addition to substitution costs, a cost q(x) is charged for not satisfying the demand x > s_n; in that case s_n also becomes a variable to be determined by the model.
The first approach has the drawback that the value of s_n must be fixed rather arbitrarily, while all the other standard sizes determined by the optimization will depend on the chosen s_n. If the cost q(x) can be determined, the second approach seems preferable. The function (11) to be minimized is then replaced by

GSK = \sum_{i=1}^{n} \int_{s_{i-1}}^{s_i} \alpha (s_i - x)^\delta f(x)\, dx + \int_{s_n}^{+\infty} q(x) f(x)\, dx. \qquad (17)
The nonlinear system then obtained is:

for i = 1, 2, ..., n-1: the equations (15) \qquad (18)

for i = n:

\alpha \delta \int_{s_{n-1}}^{s_n} (s_n - x)^{\delta - 1} f(x)\, dx - q(s_n) f(s_n) = 0. \qquad (19)-(20)
For the linear substitution cost this becomes:

for i = 1, 2, ..., n-1: the equations (16) \qquad (21)

for i = n:

\alpha \left[ F(s_n) - F(s_{n-1}) \right] - q(s_n) f(s_n) = 0. \qquad (22)-(23)
Example: as an example of a distribution with unbounded x_u, the normal density with \mu = 10 and \sigma = 2 is used for f(x). Take the following parameter values: \alpha = 1, \delta = 1 and q(x) = q, a fixed cost per unit in case the demand is not satisfied. Equations (16) and (23) now give the necessary optimality conditions. Both the GINO package and Mathematica were used to solve these equations. GINO had no convergence problems for any example when the expected value of x was used as the starting value for each s_i. With Mathematica, by contrast, the values of s_i diverged in a small number of cases; by using different starting values the minimum was found here as well in every case. Table 4 gives the optimal solution for n = 2 and n = 5 and for various values of q.
TABLE 4: Normally distributed demand

q     opt. assortment (n = 2)    opt. assortment (n = 5)
1     7.755; 8.987               7.551; 8.722; 9.651; 10.513; 11.397
5     10.111; 12.737             8.301; 9.723; 10.975; 12.341; 14.251
10    10.542; 13.698             8.420; 9.892; 11.215; 12.721; 15.062
20    10.845; 14.482             8.500; 10.005; 11.381; 12.998; 15.741
100   11.302; 15.903             8.612; 10.168; 11.624; 13.425; 17.007
500   11.594; 17.015             8.680; 10.264; 11.772; 13.661; 18.022
From this it can be concluded that the one-dimensional deterministic assortment problem, both discrete and continuous, can be solved very efficiently with existing algorithms and software. Extension to the stochastic case, however, raises considerable difficulties. Moreover, a stochastic demand makes more sense in a multi-period problem: a stochastic demand automatically implies inventories and/or shortages, and these elements can only be included in the model if it covers several periods. Pentico (1974) nevertheless formulates the one-period problem with stochastic demand, but in addition needs a number of unrealistic assumptions in order to be able to solve the problem.

III. THE DETERMINISTIC TWO-DIMENSIONAL PROBLEM

In the two-dimensional assortment problem the choice of the standard sizes is determined by two attributes, which will be called x and y. Confronted with a demand for a product, discrete or continuous in x and y, one looks for the standard sizes (s_1,t_1), (s_2,t_2), ..., (s_n,t_n) from which the demand can (partly) be satisfied
while the costs, mainly substitution costs, are minimized. A serious difficulty for the development of simple solution procedures is a consequence of the fact that the demanded sizes cannot be ranked ordinally. It is therefore no longer possible to satisfy the demand for an earlier-ranked product via a later-ranked product. Consider, for example, rectangular plates with attributes length and width, where the demand for a plate with (length, width) = (l,b) can be satisfied from standard plates with dimensions (l_s,b_s) provided the conditions l \le l_s and b \le b_s are fulfilled. For plates with dimensions (6,1), (4,3), (3,2) a ranking is possible between (4,3) and (3,2), but not between (6,1) and (4,3) or between (6,1) and (3,2).

A. Discrete demand

Of a product with attributes x and y, the sizes (x_1,y_1), (x_2,y_2), ..., (x_N,y_N) are demanded in quantities g_1, g_2, ..., g_N. An important question is whether the choice of the standard sizes must again be restricted to a choice among the demanded sizes (x_i,y_i), i = 1...N. To pose the problem clearly, Figure 8 gives a graphical representation of four demanded sizes for which 2 standard sizes must be determined.

FIGURE 8: Demanded sizes (x_i, y_i)
To satisfy demand (x,y) from standard size (s,t), x \le s and y \le t must hold. It is then clear that one of the two standard sizes must be the demanded size with the largest attribute values. For the second standard size one can restrict oneself to a choice among the three remaining demanded sizes, but it may well be useful to provide additional sizes, as indicated for instance in Figure 9: a suitably chosen additional size can satisfy the demand for two of the demanded sizes at once, while a slightly larger one can additionally satisfy the demanded units of size (x_2,y_2). These additional sizes are useful when the attributes are, for example, the length and width of rectangles and the substitution cost is a function of the difference in area between the standard size and the demanded size. For a more extensive treatment of this problem and a practical application, see Gochet and Vandebroek (1989).

FIGURE 9: Extended offering of standard sizes
For simplicity of notation, a demanded size (x_i,y_i) is written shortly as v_i, i \in V = \{1,2,\dots,N\}, while w_j, j \in E = \{1,2,\dots,T\}, denotes the set of sizes from which the standard sizes can be chosen. The substitution cost can now, as in the one-dimensional case, be represented by \varphi(v_i, w_j). An integer linear program, completely analogous to the one-dimensional case, then reads:

\min \sum_{i \in V} \sum_{j \in E} \varphi(v_i, w_j)\, g_i\, a_{ij}

subject to

\sum_{j \in E} a_{ij} = 1, \quad i \in V

a_{ij} \le \beta_j, \quad i \in V,\; j \in E

\sum_{j \in E} \beta_j \le n

a_{ij}, \beta_j \text{ binary.}

If size v_i cannot be satisfied by standard size w_j, \varphi(v_i, w_j) can be chosen very large or, better still, the variable a_{ij} can be left out of the model. All remarks made for the one-dimensional formulation apply here as well. In particular, this model is only usable for problems with a limited number of different sizes in the sets V and E. The model is illustrated with 2 examples. Figure 10 shows 25 sizes generated with length between 10 and 100 and width between 10 and 80. The numbers below the small rectangles refer to the demand g_i, while the rectangles without a number are the additional sizes that complete the set E; here E contains 85 sizes from which the standard sizes can be chosen. Figure 11 is analogous, but here a strong positive correlation was imposed between the length and the width of a demanded size. In this case too N = 25, but the set E contains only 37 different sizes. Figures 12 and 13 show the same rectangles as Figures 10 and 11, but now a sequence number has been assigned to the rectangles in order to report the results in a simple way.
FIGURE 10: Possible standard sizes, N = 25, T = 85
FIGURE 11: Possible standard sizes, N = 25, T = 37, strongly positive correlation
FIGURE 12: Possible standard sizes, N = 25, T = 85 (rectangles numbered)
FIGURE 13: Possible standard sizes, N = 25, T = 37, strongly positive correlation (rectangles numbered)
Table 5 gives the results for the example with 85 sizes, where the number of allowed standard sizes varies from n = 1 to n = 10. In each case the strong form of the LP relaxation was solved, and only for n = 9 was the LP solution fractional; a single branching in a branch-and-bound method sufficed to find the integer solution. The results of the problem with correlated attributes are presented in the same way in Table 6. All LP relaxations gave integer solutions in this example.

TABLE 5: LP solution, N = 25, T = 85
n     substitution cost               optimal assortment
      LP relaxation    integer
1     7822313          7822313        85
2     3318782          3318782        75; 76
3     2256803          2256803        41; 75; 81
4     1511314          1511314        30; 49; 75; 76
5     1123164          1123164        30; 49; 59; 75; 76
6     855632           855632         13; 41; 49; 59; 75; 76
7     663251           663251         13; 41; 47; 49; 62; 72; 76
8     502053           502053         7; 20; 41; 47; 49; 62; 72; 76
9     382099.7         382292         7; 20; 35; 41; 49; 59; 62; 72; 76
10    286776           286776         7; 13; 30; 35; 36; 49; 59; 62; 72; 76
TABLE 6: LP solution, N = 25, T = 37

n     substitution cost               optimal assortment
      (LP relaxation = integer)
1     6553329                         37
2     2289489                         18; 37
3     1299817                         18; 31; 37
4     891577                          4; 18; 31; 37
5     615013                          4; 18; 20; 31; 37
6     503821                          4; 18; 20; 31; 33; 37
7     400891                          4; 14; 18; 20; 31; 33; 37
8     306331                          4; 14; 18; 20; 27; 31; 33; 37
9     237115                          4; 14; 18; 20; 27; 31; 33; 34; 37
10    182921                          4; 7; 14; 18; 20; 27; 31; 33; 34; 37
More efficient formulations based on dynamic programming, as in the one-dimensional case, are not obvious. This is a consequence of the problem mentioned above that the sizes cannot be ranked ordinally. In some cases it may be assumed that the optimal choice of the standard sizes will be ordered. By this we mean a set of standard sizes w_1 = (s_1,t_1), w_2 = (s_2,t_2), ..., w_n = (s_n,t_n) for which s_1 \le s_2 \le \dots \le s_n and t_1 \le t_2 \le \dots \le t_n. Such a set of sizes is illustrated in Figure 14. Although one can never be certain a priori that an optimal choice of standard sizes will be ordered, this assumption is reasonable if the following two conditions are met:
- the correlation between the two attributes is strongly positive;
- the number n of standard sizes to be chosen is (very) small compared with the number N of demanded sizes.
FIGURE 14: An ordered set of standard sizes
For an application in which these 2 conditions are met, see Diegel and Bocker (1984). The advantage of ordered standard sizes is that an efficient formulation based on dynamic programming again becomes possible. This formulation uses the extended set E = \{w_1, w_2, \dots, w_T\} with g_i = 0 if w_i is a size for which there is no demand; see for instance the additional sizes in Figure 9. E is assumed to be ranked such that s_i \le s_{i+1}, and t_i \le t_{i+1} whenever s_i = s_{i+1}. Let further

\Omega(w_i) = \{w_j \in E : s_j \le s_i \text{ and } t_j \le t_i\} \quad \text{and} \quad \Omega(w_i, w_j) = \Omega(w_i) \setminus \Omega(w_j).

In Figure 9, for instance, \Omega(w_4) consists of three of the sizes and \Omega(w_4, w_3) of two of them. Finally, let \Phi_z(w_i) be the minimal cost of satisfying the demand of all units in \Omega(w_i) if z standard sizes are allowed. If only substitution costs are considered, then, with k(S, w_i) = \sum_{w_l \in S} \varphi(w_l, w_i)\, g_l denoting the cost of serving all demand in a set S via w_i,

\Phi_z(w_i) = \min\Big( k(\Omega(w_i), w_i),\; \min_{w_j \in \Omega(w_i),\, w_j \ne w_i} \big[ k(\Omega(w_i, w_j), w_i) + \Phi_{z-1}(w_j) \big] \Big).
This recursion formula is computed in the order w_1, w_2, ..., w_T, where \Phi_1(w_i) = k(\Omega(w_i), w_i). If the value of n is not determined a priori and, for example, a fixed cost c per additional standard size is imposed, then \Phi(w_i), the minimal cost of satisfying the demand of all units in \Omega(w_i), satisfies

\Phi(w_i) = \min\Big( c + k(\Omega(w_i), w_i),\; \min_{w_j \in \Omega(w_i),\, w_j \ne w_i} \big[ c + k(\Omega(w_i, w_j), w_i) + \Phi(w_j) \big] \Big).

The computations are again performed in the order w_1, w_2, ..., w_T. The examples above with 85 and 37 sizes respectively in the extended set were solved under the restriction of an ordered choice of standard sizes.
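Since the numbered recursions for this case were lost in reproduction, the following Python sketch implements the fixed-cost variant exactly as reconstructed above; the sizes, demands, area-based cost and fixed cost c are all toy assumptions.

```python
# Ordered-sizes dynamic program with a fixed cost c per standard size.
from functools import lru_cache

W = [(2, 1), (3, 2), (4, 3), (5, 5), (6, 4), (7, 6)]  # extended set E, ordered
g = [5, 0, 8, 3, 6, 4]                                # g_i = 0: no demand
T = len(W)
c = 10.0                                              # fixed cost per size

def omega(i):
    # indices of sizes dominated by W[i] (they can be served by W[i])
    return {j for j in range(T) if W[j][0] <= W[i][0] and W[j][1] <= W[i][1]}

def k(cover, i):
    # substitution cost of serving all demand in `cover` via size W[i]:
    # the wasted area s*t - x*y, weighted by demand
    s, t = W[i]
    return sum((s * t - W[j][0] * W[j][1]) * g[j] for j in cover)

@lru_cache(maxsize=None)
def Phi(i):
    # minimal fixed-plus-substitution cost to serve all demand in omega(i),
    # with W[i] as the largest standard size used
    best = c + k(omega(i), i)               # W[i] serves everything itself
    for j in omega(i) - {i}:
        best = min(best, c + k(omega(i) - omega(j), i) + Phi(j))
    return best

# the overall optimum must use a size dominating all demanded sizes;
# with the toy data that is the last element of E
print("minimal total cost:", Phi(T - 1))
```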
The results can be found in Table 7 and Table 8; they were obtained with a Pascal program. The column "% difference" gives the percentage difference relative to the optimal solution reported in Table 5 and Table 6. As can intuitively be expected, the percentage difference generally increases with increasing values of n: for (very) small ratios n/N the restriction to an ordered choice of standard sizes will not, in percentage terms, deviate too much from an optimal unrestricted choice. This is even more so when the attributes are positively correlated.
TABLE 7: Ordered standard sizes, N = 25, T = 85

n     cost     % difference     ordered standard sizes
The results for the data with strongly correlated lengths and widths then follow in Table 8. Note that for n = 1, 2, ..., 8 the optimal solution was still obtained despite the restriction to ordered standard sizes, since the optimal solution itself was ordered. If the assumption of an ordered set of standard sizes is not justified and the problem is too large for the linear programming approach, a heuristic method is desirable. Examples can be found in Page (1975) and Gochet and Vandebroek (1989), where heuristics based on dynamic programming are proposed.
If the standard sizes can be cut into several smaller sizes, the assortment problem must be combined with the cutting stock problem. The following publications have appeared on this subject: Chambers and Dyson (1976), Diegel and Bocker (1984), Beasley (1985), Yanasse et al. (1991), Agrawal (1993) and Vasko and Wolf (1994). The approaches proposed there are only usable for smaller problems, with the exception of the approach proposed in the last-mentioned publication.

TABLE 8: Ordered standard sizes, N = 25, T = 37
n     cost
1     6553329
2     2289489
3     1299817
4     891577
5     615013
6     503821
7     400891
8     306331
9     252137
10    212825
B. Continuous demand

A continuous demand over two attributes requires a function f(x,y) with f(x,y) \ge 0 and

\int_{x_l}^{x_u} \int_{y_l}^{y_u} f(x,y)\, dy\, dx = 1.

Both attributes x and y can be measured on an interval scale, where attribute x can vary in value between x_l and x_u, and attribute y between y_l and y_u. The proportion of the total demand with x_l \le x_a < x < x_b \le x_u and y_l \le y_a < y < y_b \le y_u is then \int_{x_a}^{x_b} \int_{y_a}^{y_b} f(x,y)\, dy\, dx. In many applications the demand will be such that the attributes x and y are correlated. Bivariate normal, lognormal and beta distributions can be used here. The great difficulty with these problems, however, lies in writing out the mean substitution cost per demanded unit, at least if no assumption is made about the nature of the optimal standard sizes. To illustrate this, Figure 15 assumes a demand with x_l \le x \le x_u and y_l \le y \le y_u without further specification of f(x,y). For the four standard sizes (s_1,t_1), (s_2,t_2), (s_3,t_3), (s_4,t_4) = (x_u,y_u) it is already quite complicated to write down the double integrals of the substitution cost, and it is clear that this becomes virtually impossible if nothing is known a priori about the location of the standard sizes.

FIGURE 15: Continuous demand with four standard sizes
If additional conditions are imposed, for example an ordered choice, continuous optimization does become possible. Suppose the ordered sizes (s_1,t_1), (s_2,t_2), ..., (s_n,t_n) are sought, with substitution costs of the form

\varphi((x,y),(s_i,t_i)) \quad \text{for } s_{i-1} < x \le s_i \text{ and } y \le t_i \qquad (33)

\qquad \qquad \qquad \text{or } t_{i-1} < y \le t_i \text{ and } x \le s_i. \qquad (34)
In this case the mean substitution cost per unit can be written as

GSK = \sum_{i=1}^{n} \iint_{A_i} \varphi((x,y),(s_i,t_i))\, f(x,y)\, dy\, dx

where A_i denotes the region defined by (33)-(34), and with s_0 = x_l, s_n = x_u, t_0 = y_l and t_n = y_u. Minimizing GSK is in this case obviously more complex than in the one-dimensional case, and specific software will presumably be required to solve this problem efficiently. It is probably advisable, certainly when the assumption of ordered standard sizes is not reasonable, to approximate the continuous problem by a discrete one and to apply the methods of the previous paragraph.
IV. CONCLUSION

The one- and two-dimensional discrete assortment problems can be solved optimally via integer linear programming if the number of sizes considered is not too large. Large problems can usually be solved efficiently via dynamic programming. For this, however, the substitution cost must satisfy certain conditions and, for the two-dimensional problem, additional assumptions must be made about the desired policy. It was shown, for instance, how the optimal set of ordered standard sizes can easily be determined by dynamic programming. If demand is continuous, a system of nonlinear equations must be solved to determine the optimal standard sizes. Here suboptimal solutions may be obtained, because the system can have multiple solutions. Especially for these problems, the software available for solving such systems will limit the maximum size of the problem.
REFERENCES

Agrawal, P., 1993, Determining Stock-Sheet-Sizes to Minimize Trim Loss, European Journal of Operational Research 64, 423-431.
Beasley, J., 1985, An Algorithm for the Two-Dimensional Assortment Problem, European Journal of Operational Research 19, 253-261.
Chambers, M. and Dyson, R., 1976, The Cutting Stock Problem in the Flat Glass Industry - Selection of Stock Sizes, Operational Research Quarterly 27, 949-957.
Diegel, A. and Bocker, H., 1984, Optimal Dimensions of Virgin Stock in Cutting Glass to Order, Decision Sciences 15, 260-274.
Frank, C., 1965, A Note on the Assortment Problem, Management Science 11, 724-726.
Gochet, W. and Vandebroek, M., 1989, A Dynamic Programming Based Heuristic for Industrial Buying of Cardboard, European Journal of Operational Research 38, 104-112.
Hinxman, A., 1980, The Trim-Loss and Assortment Problem: a Survey, European Journal of Operational Research 5, 8-18.
Page, E., 1975, A Note on a Two-Dimensional Dynamic Programming Problem, Operational Research Quarterly 26, 321-324.
Pentico, D.W., 1974, The Assortment Problem with Probabilistic Demand, Management Science 21, 286-290.
Pentico, D.W., 1988, The Discrete Two-Dimensional Assortment Problem, Operations Research 36, 324-332.
Sadowski, W., 1959, A Few Remarks on the Assortment Problem, Management Science 6, 13-24.
Vasko, F. and Wolf, F., 1994, A Practical Approach for Determining Rectangular Stock Sizes, Journal of the Operational Research Society 45, 281-286.
Wolfson, M., 1965, Selecting the Best Lengths to Stock, Operations Research 13, 570-585.
Yanasse, H., Zinober, A. and Harris, R., 1991, Two-Dimensional Cutting Stock with Multiple Stock Sizes, Journal of the Operational Research Society 42, 673-683.
RECENT DEVELOPMENTS IN INTEGER LINEAR PROGRAMMING SOLVE MANAGERIAL DECISION PROBLEMS

by Z. DEGRAEVE*
I. INTRODUCTION

There is no better illustration of cross-fertilization between academic research and real-life applications than the explosion in the use of mathematical programming for solving managerial decision problems that we have observed in the last decade. Faced with increased global competition and maturing markets, managers previously happy with a "good" answer to their problems are losing their competitive edge and are continuously pressured to find the "best" solution. As management objectives and the working environment become better outlined, they want an optimal strategy. Optimization means maximizing (or minimizing) an objective subject to constraints describing the decision environment. Researchers and practitioners alike have been extremely excited about the new developments in both the underlying mathematical theory and in computer technology (hardware and software) that have made it possible to solve, in particular, large-scale real-life integer linear programming problems. The decision variables of a problem answer the managerial question: "What strategy to implement?". They indicate what decisions have to be taken. In contrast to linear programming (LP), where the decision variables can take any nonnegative value, the distinguishing feature of discrete, combinatorial, or integer optimization is that some of the decision variables of a problem are required to belong to a discrete
* Department of Applied Economics, K.U.Leuven
set, typically a subset of the integers. The integrality enforcing capability is perhaps more powerful than the reader at first realizes. A frequent use of an integer variable in a model is as a zero/one variable to represent a go/no-go decision. In all industries, we see emerging islands of optimization. An important and widespread area of applications concerns the management and efficient allocation of scarce resources to increase productivity. These applications include operational problems such as the distribution of goods, production scheduling and machine sequencing. Airplanes fly around the world according to tight schedules that are derived with complex integer programming based algorithms. In the paper industry, advanced model generation techniques are being used to optimally cut large rolls of paper in order to satisfy customer demand (Degraeve (1992)). Other applications include planning problems such as capital budgeting, facility location and portfolio selection, and design problems such as telecommunication and transportation network design, VLSI circuit design and the design of automated production systems. Financial institutions on Wall Street use linear and integer programs to manage client portfolios and pension plans and to do computer trading. In the light of uncertainties in demand, General Motors has used stochastic integer programming for plant openings and closings and for capacity planning (Eppen, Martin and Schrage (1989)). Discrete optimization problems also arise in cryptography for the design of unbreakable codes and in politics for the selection of fair election districts. The purpose of this article is to explain why such a tremendous surge in applications of integer programming has come about. We will illustrate theoretical research findings related to model formulation in Section II. In Section III, we discuss recent developments in optimization software, namely the emergence of structured modeling languages. Conclusions and areas for future research are given in Section IV.

II. SOME ILLUSTRATIONS OF THEORETICAL DEVELOPMENTS IN MODEL FORMULATION

Historically, introductory texts on integer and linear programming (ILP) have concentrated on presenting the mechanics of solving ILPs. The solution of ILPs is, however, purely mechanical and is therefore best relegated to the computer. The scarce commodity is the skill of
identifying applications of ILP and then formulating an "appropriately good" ILP model. It is that skill that will be the main focus of our teaching activities in the years to come. To understand why we have to develop an "appropriately good" ILP model, we have to be aware of how a computer solves integer programs. This is illustrated in Section II.A. In Sections II.B, II.C and II.D, we illustrate how the use of cutting planes and variable redefinition can help us formulate "good" ILP models.

A. The Branch and Bound Method for Solving Integer Programs

In all commercially available computer programs, integer programming problems are solved by solving a sequence of linear programs in a process called branch and bound. The linear programming based branch and bound (B&B) approach is currently the most efficient general-purpose method of solving ILPs. Today, linear programs - with continuous nonnegative variables only - are easy to tackle by a well-known solution algorithm, called the simplex method. It is the additional integrality condition on the variables of integer programs that makes solving them hard. We will illustrate the B&B technique using a simple knapsack example problem, used solely for illustrative purposes. The data are given in Table 1. Suppose we have a knapsack which can hold a volume of 10 liters (l). We could carry a sixpack of beer and/or a carafe of wine to the market. The sixpack of beer sells for 10 and takes up 7 l of space in the knapsack; the carafe of wine sells for 8 and takes up 6 l of space. What should we bring to the market in order to maximize profit?
TABLE 1: Data for the Knapsack Problem

                        Sixpack of Beer    Carafe of Wine
Profit ($)              10                 8
Space Requirement (l)   7                  6
The decision to be made is whether or not to bring the sixpack of beer and/or the carafe of wine to the market. The content of the knapsack describes the working environment. Therefore, we need two zero/one decision variables BEER and WINE defined as follows:

BEER = 1 if we will carry a sixpack of beer, 0 otherwise,
WINE = 1 if we will carry a carafe of wine, 0 otherwise.
The integer programming model to be submitted to a computer for solution with the branch and bound procedure is then as follows:

MAX 10 BEER + 8 WINE                (II.A.1)
SUBJECT TO
KNAPSACK) 7 BEER + 6 WINE <= 10     (II.A.2)
END
INTE 2                              (II.A.3)
In the objective function (II.A.1) we maximize the profit that can be obtained at the market for whatever we bring. The knapsack constraint (II.A.2) indicates that we cannot exceed the 10 l capacity of the knapsack. The condition (II.A.3) is required to inform the computer that the two decision variables can only take on the values zero or one. We use the linear optimization program LINDO (Schrage (1993)) to solve the model (II.A.1)-(II.A.3). The LINDO output is as follows:

LP OPTIMUM FOUND AT STEP 2
OBJECTIVE VALUE = 14.0000000

OBJECTIVE FUNCTION VALUE
1)      14.000000
VARIABLE    VALUE       REDUCED COST
BEER        1.000000    -.666666
WINE         .500000     .000000

NEW INTEGER SOLUTION OF 10 AT BRANCH 1 PIVOT 3
BOUND ON OPTIMUM: 14
ENUMERATION COMPLETE. BRANCHES= 2 PIVOTS= 5
LAST INTEGER SOLUTION IS THE BEST FOUND
RE-INSTALLING BEST SOLUTION ...

OBJECTIVE FUNCTION VALUE
1)      10.000000
VARIABLE    VALUE       REDUCED COST
BEER        1.000000    -10.000000
WINE         .000000     -8.000000
LINDO first gives the LP relaxation solution, the LP optimum. It is the solution to the original model with the integrality conditions replaced by the associated upper and lower bounds on the values of the decision variables. The LP relaxation solution suggests carrying 1 sixpack of beer and .5 carafes of wine. As .5 carafes of wine does not satisfy the integrality condition, the B&B procedure starts. An integer solution with an objective function value of 10 is found at the first branch after 3 pivots, or iterations of the simplex method. The enumeration is complete after 2 branches and 5 pivots. The optimal integer solution has an objective function value of 10 and suggests carrying the sixpack of beer only. The general idea of LINDO for solving integer programs is to construct a B&B enumeration tree by partitioning, in a process called branching, the set of all feasible solutions to a given problem into smaller and non-overlapping subsets, called nodes. Figure 1 depicts the complete B&B tree for the example problem. Observe that branching is accomplished by adding additional constraints on the values of the decision variables, thereby creating the subsets or nodes of the B&B tree. All subsets have different extra constraints. A linear program is then solved over the feasible region of each subset, consisting of all possible solutions to the subset (at each node). In each node, you first find the node number indicating the order in which the node was generated during the search. OV indicates the objective function value, and BEER and WINE give the values of the decision variables resulting from the LP solution. The solution at node 0 is the LP relaxation of the integer program. The objective function value (OV) of the LP at each node is a bound (an upper (lower) bound for a max (min) problem) on the value of the best possible solution in this subset. At some point of the procedure, a particular subset will have enough additional constraints that the LP solution satisfies the integrality conditions. This happens at nodes 1 and 3 in Figure 1. The best of the integer solutions found, called the incumbent solution, is always stored.
FIGURE 1
Branch and bound search tree for the knapsack example problem
The B&B algorithm cleverly allows one to eliminate certain subsets from consideration when the LP bound at a particular node is below (for a max problem) or above (for a min problem) the current incumbent solution. This does not happen, however, in our example problem, where we have in fact a complete enumeration investigating all combinations of the possible values of the decision variables! Observe also that at node 4 the problem is infeasible, because given the knapsack constraint (II.A.2) it is impossible to carry both the sixpack of beer and the carafe of wine. The B&B procedure stops when all subsets of the original formulation have been searched or eliminated. The incumbent solution at enumeration completion is then the optimal solution. From the above exposition, we should understand that the key to the success of a "good" B&B implementation is the quality of the bounds found by solving the LP at each of the nodes. Subsets of the original feasible region of a problem can only be eliminated from consideration based on bound criteria. The better the bounds, the more subsets can be eliminated, and as such the time the computer spends
in searching is reduced. Clearly, we would like our bounds to be as close as possible to the optimal integer programming objective function value. The difference between the LP relaxation objective value at node 0 (OV(LP) = 14) and the optimal integer programming objective value (OV(IP) = 10) is a measure for the quality of the bound and consequently the formulation. The quality of a formulation is expressed by a statistic called the "gap". The percentage gap for a max, respectively min, problem is defined as follows:

$$GAP_{max} = \frac{OV(LP) - OV(IP)}{OV(IP)} \times 100\%$$

$$GAP_{min} = \frac{OV(IP) - OV(LP)}{OV(LP)} \times 100\%$$

The gap for our knapsack example problem is (14 - 10) * 100 / 10 = 40%, an extremely bad result. Ideally, we want the gap to be zero, in which case no enumeration is necessary, or at least within a few percentage points, resulting in an LP relaxation OV close to OV(IP). We can conclude that, contrary to linear programming, where almost any logically correct formulation will do to solve a continuous problem, for integer programming there is a best formulation, namely the one that results in a zero gap. Nowadays, impressive research efforts in integer programming are directed towards finding the best formulation for discrete optimization problems. Simple example problems, as the one above, sometimes give us a wrong perspective on the difficulty of solving integer programs. The idea is often volunteered that we could solve IPs by complete enumeration: just list all the feasible integer points and evaluate the objective function value at each of the points in order to pick the point which optimizes this value. However, complete enumeration is not a reasonable solution procedure. Suppose that we had an IP with 100 variables that could only take the values 0 or 1. In this case there could be up to 2^100 feasible solutions, which is about 1.27 * 10^30. The time required to enumerate all of these points would exceed a lifetime, even on the fastest CRAY computers. In general, for an LP, the solution time increases approximately proportionally with the number of variables and approximately with the square of the number of constraints. For a given IP problem, the time may in fact decrease as the number of constraints is increased. However, as the number of integer variables is increased, the solution time may increase dramatically. Some small IPs, e.g. 60 constraints and 60 variables, are extremely difficult to solve. Just as with LPs, there may be alternate IP formulations of a given problem. With IPs, however, the solution time will critically depend upon the formulation. Producing "good" IP formulations requires skill. Below, we will illustrate two techniques that help us formulate good models. For many business decision problems, the difference between a good formulation and a poor formulation may be the difference between whether the problem is solvable or not!

B. Producing Good IP Formulations by Adding Cutting Planes

IP formulations with small (ideally, zero) gaps are generally achieved by adding additional "not-so-obvious" constraints to the formulation, either at the outset or during the solution phase. Those additional constraints, called cutting planes or cuts for short, should be redundant for the integer programming problem, but they should not be redundant for the LP relaxation, otherwise they would be useless anyhow. We call the process of adding additional constraints to an LP "tightening". Therefore, integer programming formulations that result in small gaps are called tight formulations. A valid cut for the example knapsack problem introduced in section II.A is as follows:
CUT) BEER + WINE <= 1                     (II.B.1)
The cut simply states that both a sixpack of beer and a carafe of wine will not fit together in the knapsack. It is derived by observing that if we put a sixpack of beer in the knapsack (BEER = 1), the remaining capacity of 3 (= 10 - 7) is too small for a carafe of wine, consequently WINE = 0. Likewise, if we put a carafe of wine in the knapsack (WINE = 1), the remaining capacity of 4 (= 10 - 6) is too small for a sixpack of beer, resulting in BEER = 0. The complete model to be submitted to the computer is now as follows:
MAX 10 BEER + 8 WINE
SUBJECT TO
KNAPSACK) 7 BEER + 6 WINE <= 10
CUT)      BEER + WINE <= 1
END
INTE 2

The solution output given by LINDO is then as follows:

LP OPTIMUM FOUND AT STEP 2
OBJECTIVE VALUE = 10.0000000
ENUMERATION COMPLETE. BRANCHES= 0 PIVOTS= 2
LAST INTEGER SOLUTION IS THE BEST FOUND
RE-INSTALLING BEST SOLUTION ...

OBJECTIVE FUNCTION VALUE
1)   10.000000
VARIABLE   VALUE      REDUCED COST
X          1.000000   -10.000000
Y           .000000    -8.000000
NO. ITERATIONS= 2
BRANCHES= 0 DETERM.= 1.000E 0
As before, LINDO gives first the LP relaxation solution of the problem, the LP optimum. However, observe here that the objective function value of the LP relaxation (OV(LP) = 10) is the same as the optimal integer solution objective function value we found in section II.A after the B&B procedure! This implies that for the model with the cut we do not have to search anymore for an optimal integer solution, as we have already found it during the solution of the LP relaxation. It becomes completely unnecessary to perform the B&B procedure. This is indicated by the statement BRANCHES= 0 in the solution report. We save a tremendous amount of computer time. Observe that the gap in this case is 0%, illustrating that we have, in fact, found the best formulation for this problem instance.
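The effect of the cut can also be checked outside LINDO. The following minimal sketch solves both LP relaxations in Python with SciPy's linprog routine; the use of SciPy is our assumption here, as the article itself works with LINDO:

from scipy.optimize import linprog

# LP relaxation of the knapsack model; linprog minimizes,
# so the profit coefficients 10 and 8 are negated.
c = [-10, -8]
bounds = [(0, 1), (0, 1)]

# Without the cut: only the knapsack constraint 7 BEER + 6 WINE <= 10.
res = linprog(c, A_ub=[[7, 6]], b_ub=[10], bounds=bounds)
print(res.x, -res.fun)   # [1.0, 0.5] and 14.0: fractional, a 40% gap

# With the cut BEER + WINE <= 1 added, the relaxation is tight.
res = linprog(c, A_ub=[[7, 6], [1, 1]], b_ub=[10, 1], bounds=bounds)
print(res.x, -res.fun)   # [1.0, 0.0] and 10.0: integral, a 0% gap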
For a problem with two decision variables, there is an insightful graphical representation of the effect of a cut. Figure 2 illustrates this for the knapsack example. The objective function and constraints are represented by straight lines in the two-dimensional space, with the variable BEER along the horizontal axis and the variable WINE along the vertical axis. Remark the representation of the knapsack constraint (KNAPSACK), the cutting plane (CUT), the objective line at the LP relaxation optimum (OBJ) and the objective line at the IP optimum (OBJ*). The lines SUB are the respective simple upper bounds of 1 on the values of the decision variables implicit in the integrality conditions. The big dots give the feasible integer points that are the only candidates for solution. The crossed area contains all possible solutions to the LP relaxation of the knapsack problem without the cut. The single numbers between parentheses correspond to the node numbers of the B&B tree in Figure 1. They indicate the order in which the associated points were enumerated during the B&B search. The double crossed area represents that part of the original feasible region that is effectively cut off by the cutting plane. The difference between the two objective lines (OBJ) and (OBJ*) is then the graphical representation of the gap. We call a family of cuts that results in a zero gap for an integer programming problem facets. In technical terms, facets describe the convex hull of integer solutions to a particular integer programming problem.

FIGURE 2
Graphical representation of the effect of the cutting plane

For most problem types, one single family of facets may not be sufficient to completely obtain all the integer solutions, and several families of different facets may indeed be required.
In order to fully appreciate the power of cutting planes, consider the following single machine sequencing problem. An operator of a drilling machine has six different jobs or tasks waiting to be performed. Each job takes the operator a different amount of time. The processing time of each job is given in Table 2. As his clients are waiting for their products, he would like to please as many clients as possible over a particular time span, while he can work only on one job at a time. He realizes that he wants to maximize the number of jobs finished over this particular time span (e.g. a time unit). Researchers have shown that maximizing the number of jobs finished per unit time is identical to minimizing the sum of start times of the jobs.

TABLE 2
Data for the single machine sequencing problem

Job Number         1    2    3    4    5    6
Processing Time    14   9    26   3    6    17
The problem the machine operator is facing is how to select the job to work on next, i.e. determine the work sequence, such that this performance measure is optimized. It is well known that this is a fairly easy problem. The best the operator can do is to sequence the jobs in order of increasing processing times, i.e. start to work on the job that requires the shortest amount of time, next do the second shortest job, etc. This sequencing procedure is known as the shortest processing time rule (Herroelen (1987)). The optimal solution for the example given in Table 2, according to the shortest processing time rule, is given in Table 3. Remark that the total sum of start times for this problem instance is 111.
TABLE 3
Optimal solution for the single machine sequencing problem

Sequence Number   Job to Be Done   Processing Time   Start Time   Cumulative Sum of Start Times
1                 Job 4            3                 0            0
2                 Job 5            6                 3            3
3                 Job 2            9                 9            12
4                 Job 1            14                18           30
5                 Job 6            17                32           62
6                 Job 3            26                49           111
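The shortest processing time rule is also easy to state procedurally. A minimal Python sketch, with the Table 2 data hard-coded, reproduces the sequence and the total of 111 from Table 3:

# Shortest processing time rule: sequence the jobs in order of
# increasing processing time and accumulate the start times.
processing_time = {1: 14, 2: 9, 3: 26, 4: 3, 5: 6, 6: 17}

start, total = 0, 0
for job in sorted(processing_time, key=processing_time.get):
    print(f"Job {job} starts at {start}")
    total += start
    start += processing_time[job]
print("Sum of start times:", total)   # -> 111, as in Table 3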
The single machine sequencing problem can also be solved using mathematical programming. Let us define s_j and p_j respectively as the start time and the processing time of job j, and y_ij to be 1 if job i is sequenced before job j, 0 otherwise (i, j = 1, 2, ..., 6; i != j). A straightforward integer programming formulation is then as follows:

$$\text{(SMSW)}\qquad \min \sum_{j=1}^{6} s_j \tag{II.C.1}$$

subject to

$$y_{ij} + y_{ji} = 1 \qquad i = 1, \ldots, 5;\; j = i+1, \ldots, 6 \tag{II.C.2}$$

$$s_i + p_i \leq s_j + M \cdot y_{ji} \qquad i, j = 1, \ldots, 6;\; j \neq i \tag{II.C.3}$$

$$s_j \geq 0,\; y_{ij} \in \{0, 1\} \qquad i, j = 1, \ldots, 6;\; j \neq i \tag{II.C.4}$$
In formulation SMSW, the objective function (II.C.1) is the sum of the jobs' start times to be minimized. The sequencing constraints (II.C.2) model the requirement that either job i must be sequenced before job j (y_ij = 1, y_ji = 0) or vice versa (y_ij = 0, y_ji = 1). The start time constraints (II.C.3) describe the relationship between the start time and the sequencing variables. The big M is a fixed large number such as 999. Those constraints indicate that if job i is sequenced before job j (y_ij = 1), and consequently y_ji = 0, then the start time of job j should be larger than or equal to the start time of job i plus its processing time p_i. In case job j is to be done before job i (y_ji = 1), the constraint becomes essentially redundant due to the big M. Finally, constraints (II.C.4) enforce the proper nonnegativity and integrality conditions. Solving this model with LINDO for the problem instance introduced in Table 2 gives:
CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0:11.04
LP OPTIMUM FOUND AT STEP 30
OBJECTIVE VALUE = 0.000000000E+00
NEW INTEGER SOLUTION OF 117.000000 AT BRANCH 9 PIVOT 350
BOUND ON OPTIMUM: 40.71795
NEW INTEGER SOLUTION OF 114.000000 AT BRANCH 10 PIVOT 357
NEW INTEGER SOLUTION OF 111.000000 AT BRANCH 77 PIVOT 1752
BOUND ON OPTIMUM: 49.41172
ENUMERATION COMPLETE. BRANCHES= 164 PIVOTS= 3149
LAST INTEGER SOLUTION IS THE BEST FOUND
RE-INSTALLING BEST SOLUTION ...
CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0:26.04

OBJECTIVE FUNCTION VALUE
1)   111.00000
VARIABLE   VALUE       REDUCED COST
Y56        1.000000    0.000000
Y46        1.000000    0.000000
Y26        1.000000    0.000000
Y16        1.000000    0.000000
Y45        1.000000    0.000000
Y63        1.000000    0.000000
Y53        1.000000    0.000000
Y43        1.000000    0.000000
Y23        1.000000    0.000000
Y13        1.000000    0.000000
Y52        1.000000    0.000000
Y42        1.000000    0.000000
Y51        1.000000    0.000000
Y41        1.000000    0.000000
Y21        1.000000    0.000000
S1         18.000000   0.000000
S2         9.000000    0.000000
S3         49.000000   0.000000
S5         3.000000    0.000000
S6         32.000000   0.000000
In the computer output above, observe first that the LP relaxation value is 0. After 164 branches in the B&B enumeration tree, the optimum value of 111 is found. Consequently, the gap for this model is effectively infinitely large (111/0)! The total computational time for solving this problem instance on a fast 486, 66 MHz PC is given by the difference between the cumulative CPU times in the solution report, namely 15.00 seconds (= 0:0:26.04 - 0:0:11.04). A prohibitive amount of time for such a small problem. This indicates essentially that the model above is useless for solving large real-life single machine sequencing problems. However, the existence of a strong cutting plane for this problem makes mathematical programming a viable technique for machine sequencing. Degraeve (1990) proposes:

$$s_j \geq \sum_{i=1,\, i \neq j}^{6} p_i \, y_{ij} \qquad j = 1, \ldots, 6 \tag{II.C.5}$$
This cut enforces a job's start time to be at least the sum of the processing times of the jobs sequenced ahead of it. It can easily be shown that the cuts (II.C.5) completely describe all the possible integer solutions to problem SMSW, in such a way that constraints (II.C.3) become redundant, yielding the model SMSS below. Moreover, it also results in a much more compact formulation!

$$\text{(SMSS)}\qquad \min \sum_{j=1}^{6} s_j \tag{II.C.6}$$

subject to

$$y_{ij} + y_{ji} = 1 \qquad i = 1, \ldots, 5;\; j = i+1, \ldots, 6 \tag{II.C.7}$$

$$s_j \geq \sum_{i=1,\, i \neq j}^{6} p_i \, y_{ij} \qquad j = 1, \ldots, 6 \tag{II.C.8}$$

$$s_j \geq 0,\; y_{ij} \in \{0, 1\} \qquad i, j = 1, \ldots, 6;\; j \neq i \tag{II.C.9}$$
Solving model SMSS with LINDO results in the following computer output:

CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0:11.26
LP OPTIMUM FOUND AT STEP 8
OBJECTIVE VALUE = 111.000000
NEW INTEGER SOLUTION OF 111.000000 AT BRANCH 0 PIVOT 8
BOUND ON OPTIMUM: 111.0000
ENUMERATION COMPLETE. BRANCHES= 0 PIVOTS= 8
LAST INTEGER SOLUTION IS THE BEST FOUND
RE-INSTALLING BEST SOLUTION ...
CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0:12.32

OBJECTIVE FUNCTION VALUE
1)   111.00000
VARIABLE   VALUE       REDUCED COST
Y63        1.000000    17.000000
Y56        1.000000    6.000000
Y53        1.000000    6.000000
Y52        1.000000    6.000000
Y51        1.000000    6.000000
Y46        1.000000    3.000000
Y45        1.000000    3.000000
Y43        1.000000    3.000000
Y42        1.000000    3.000000
Y41        1.000000    3.000000
Y26        1.000000    9.000000
Y23        1.000000    9.000000
Y21        1.000000    9.000000
Y16        1.000000    14.000000
Y13        1.000000    14.000000
S1         18.000000   0.000000
S2         9.000000    0.000000
S3         49.000000   0.000000
S5         3.000000    0.000000
S6         32.000000   0.000000
In this LINDO output, observe that the LP relaxation value is 111, which is indeed the optimal objective function value of the integer solution. Model SMSS results in a gap of 0%! Degraeve (1990) proves that this is always the case. Remark also that this formulation for the problem takes about one second of CPU time to solve (= 0:0:12.32 - 0:0:11.26) on the same PC. It is the difference in computation times between model SMSW (about 15 s) and model SMSS (about 1 s) for the same problem instance that makes mathematical programming with model SMSS a practically useful technique for solving large single machine sequencing problems. Moreover, this difference in computation times will explode as the number of jobs is increased. This is mainly due to the explosion of the computation time for model SMSW, while for model SMSS the CPU time will only moderately increase with the problem size.
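For readers who wish to experiment with the tightened model, the following sketch restates SMSS in Python. The PuLP modeling library and its bundled solver are assumptions on our part, as the article itself uses LINDO; the constraints follow (II.C.7) and the cuts (II.C.5):

import itertools
import pulp

# Processing times from Table 2.
p = {1: 14, 2: 9, 3: 26, 4: 3, 5: 6, 6: 17}
jobs = list(p)

model = pulp.LpProblem("SMSS", pulp.LpMinimize)
s = {j: pulp.LpVariable(f"s{j}", lowBound=0) for j in jobs}
y = {(i, j): pulp.LpVariable(f"y{i}_{j}", cat="Binary")
     for i in jobs for j in jobs if i != j}

model += pulp.lpSum(s[j] for j in jobs)          # objective (II.C.6)
for i, j in itertools.combinations(jobs, 2):
    model += y[i, j] + y[j, i] == 1              # sequencing (II.C.7)
for j in jobs:                                   # cuts (II.C.5)
    model += s[j] >= pulp.lpSum(p[i] * y[i, j] for i in jobs if i != j)

model.solve()
print(pulp.value(model.objective))               # -> 111.0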
D. Producing Good Formulations With Variable Redefinition

In addition to the use of cutting planes, an opposing avenue of research about good problem formulations has pursued the idea of incorporating theorems or propositions regarding the structure of the optimal solutions into a problem's model. In the past, academic research has often resulted in conditions and/or relationships that decision variables of particular, well-known problem formulations should satisfy at the optimum. Those relationships are described in theorems and proofs. Historically, people have tried to exploit these conditions in order to curtail the search for optimal solutions using enumeration methods such as branch and bound. Recent research focuses on developing new problem formulations that implicitly incorporate those optimality conditions. Models that incorporate those additional conditions will be much tighter, as there is extra information embedded in them. The development of the tight formulation for the single facility, multi-item uncapacitated lotsizing problem using variable redefinition (Martin (1987) and Eppen and Martin (1987)) is a nice illustration of this powerful technique.

A distributor faces weekly demands for his products. We assume that the total number of products is n and that the total number of weeks is m. Assume also that the demand d_kt for each product k (k = 1, 2, ..., n) in week t is satisfied at the end of the week. He can fulfill the demand for product k in week t either from inventory built up previously, i_{k,t-1}, or from ordering in the beginning of the week. If he places an order, his supplier delivers the items in time such that he can satisfy the demand of the current week and possibly build up inventory to satisfy later demands. Ordering product k in a particular week costs a fixed amount F_k, independent of the number of units ordered. The variable cost (cost per unit) is not important for this problem. Keeping an item in inventory costs an amount h_k per unit of product k per week held in inventory. The problem of the distributor is when and how much to order from the supplier such that he incurs a minimum total ordering and inventory holding cost. Clearly, if he orders a product every week, he will pay the fixed ordering cost each week but he does not incur any inventory holding costs. However, if he orders many units at once such that he can cover demand of several consecutive weeks, he avoids paying the fixed ordering cost each week but his inventory holding cost increases. Depending on the costs and the demands of the products, there is an optimal time interval between orders that minimizes total ordering and inventory holding costs. Let the variables y_kt = 1 if the distributor orders product k in week t, 0 otherwise, and x_kt be the number of units of product k ordered in week t (k = 1, 2, ..., n; t = 1, 2, ..., m). A valid formulation for this decision problem is then as follows:
$$\text{(LSW)}\qquad \min \sum_{k=1}^{n} \sum_{t=1}^{m} \left( F_k \, y_{kt} + h_k \, i_{kt} \right) \tag{II.D.1}$$

subject to

$$i_{k,t-1} + x_{kt} - d_{kt} = i_{kt} \qquad k = 1, \ldots, n;\; t = 1, \ldots, m \tag{II.D.2}$$

$$x_{kt} \leq M \cdot y_{kt} \qquad k = 1, \ldots, n;\; t = 1, \ldots, m \tag{II.D.3}$$

$$x_{kt} \geq 0,\; y_{kt} \in \{0, 1\} \qquad k = 1, \ldots, n;\; t = 1, \ldots, m \tag{II.D.4}$$
In model LSW, the objective function (II.D.1) to be minimized is the total ordering and inventory holding cost for all products over all weeks. The inventory balance equations (II.D.2) model the fact that for each product k and week t, the ending inventory of the previous week i_{k,t-1}, plus the number of units ordered in the beginning of the week x_kt, minus the demand at the end of the week d_kt, must be equal to the inventory at the end of the week, i_kt. The variable upper bound constraints (II.D.3) enforce the condition that for each product k in each week t, in case the distributor does not order (y_kt = 0), the number of units ordered should also be zero (x_kt = 0). However, in case the distributor orders (y_kt = 1), any number of units is allowed. Again, as in (II.C.3), the big M is a large constant whose value here should be at least $\sum_{r=t}^{m} d_{kr}$ for product k in week t. The data for a small example problem with two products (n = 2) and five weeks (m = 5) is given in Table 4.

TABLE 4
Data for the lotsizing problem

          Demand 1   Demand 2   Demand 3   Demand 4   Demand 5   F     h
Prod. 1   60         100        140        200        120        260   2
Prod. 2   40         60         100        40         80         200   1
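The tightest valid choice of the big M in (II.D.3) can be read directly from the demand table. A short Python sketch, with the Table 4 data hard-coded, computes it for every product and week:

# Tightest valid big M for (II.D.3): the remaining demand of
# product k from week t onward (weeks numbered 1..5).
demand = {1: [60, 100, 140, 200, 120],
          2: [40, 60, 100, 40, 80]}

M = {(k, t + 1): sum(d[t:]) for k, d in demand.items() for t in range(5)}
print(M[1, 1], M[2, 3])   # -> 620 and 220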
Solving model LSW for the example data in Table 4 with LINDO gives the following computer output:

CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0:16.53
LP OPTIMUM FOUND AT STEP 18
OBJECTIVE VALUE = 998.653200
NEW INTEGER SOLUTION OF 2200.00000 AT BRANCH 7 PIVOT 39
BOUND ON OPTIMUM: 1076.153
NEW INTEGER SOLUTION OF 1950.00000 AT BRANCH 18 PIVOT 71
BOUND ON OPTIMUM: 1076.153
NEW INTEGER SOLUTION OF 1900.00000 AT BRANCH 30 PIVOT 108
BOUND ON OPTIMUM: 1076.153
NEW INTEGER SOLUTION OF 1880.00000 AT BRANCH 61 PIVOT 213
BOUND ON OPTIMUM: 1076.153
ENUMERATION COMPLETE. BRANCHES= 73 PIVOTS= 241
LAST INTEGER SOLUTION IS THE BEST FOUND
RE-INSTALLING BEST SOLUTION ...
CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0:23.84

OBJECTIVE FUNCTION VALUE
1)   1880.0000
VARIABLE   VALUE         REDUCED COST
Y23        1.000000      -240.000000
Y21        1.000000      200.000000
Y14        1.000000      260.000000
Y13        1.000000      260.000000
Y11        1.000000      260.000000
I11        100.000000    .000000
I14        120.000000    .000000
I21        60.000000     .000000
I23        120.000000    .000000
I24        80.000000     .000000
X11        160.000000    .000000
X21        100.000000    .000000
X13        140.000000    .000000
X14        320.000000    .000000
X23        220.000000    .000000
The LP relaxation value is 998.6532, while the optimal total cost is 1880. This effectively results in a gap of 88.25%! Observe that the model took about 7.5 seconds to solve on the 486, 66 MHz PC. Again, this is a prohibitively large amount of CPU time, such that this formulation cannot be used for solving large realistically-sized problem instances. Moreover, similar to formulation SMSW, the computation time for formulation LSW will explode with only modest increases in the number of products and weeks. Already a few decades ago, Wagner and Whitin (1958) discovered that for this lotsizing problem there exists a peculiar condition that is satisfied at the optimum by the decision variables x_kt and i_{k,t-1}. They proved that in a particular week t for any product k, the demand will be satisfied either from ordering in the beginning of the week or from the ending inventory of the previous week, but not both. Mathematically, the "Wagner - Whitin Theorem" can be written as x_kt * i_{k,t-1} = 0, implying that at least one of both variables must be zero! Moreover, if x_kt = 0 then i_{k,t-1} should be at least d_kt, and if i_{k,t-1} = 0, then x_kt should be at least d_kt. Continuing this reasoning leads to the insight that in case the distributor orders, he will do so to cover the demand of a consecutive number of weeks completely! For example, observe in the optimal LINDO output that the lotsize for item 1 in week 1 is 160 items (X11 = 160), which is exactly the demand for this product in week 1 and week 2. Recently, Eppen and Martin (1987) have derived a tight formulation for the multi-item lotsizing problem LSW by incorporating the Wagner - Whitin theorem implicitly into their model. Martin calls his technique variable redefinition because it makes use of a new variable z_ktr, being the fraction of the demand for product k from week t till week r that will be satisfied from an order placed in week t. In order to illustrate the definition of the new variable more clearly, remark that the total demand for product 1 from week 2 till week 4 is 440 (= 100 + 140 + 200). The new variable z_124 will then be the percentage of 440 that will be satisfied from an order placed in the beginning of week 2. The interpretation of z_ktr can nicely be illustrated in a network, which is shown in Figure 3 for a three week problem.

FIGURE 3
Lotsize decision network

There is such a network for every product k. The arcs have the same interpretation as the variables z_ktr. The nodes represent the time periods (weeks). In addition to the variable z_ktr there is also a cost c_ktr associated with each arc in the product network. This cost solely consists of the inventory holding cost that will be incurred by the production decision of the arc. In general, this cost can be written as follows:

$$c_{ktr} = \sum_{l=t}^{r} (l - t) \, h_k \, d_{kl} \tag{II.D.5}$$
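Formula (II.D.5) is easy to check numerically. The following Python sketch, again with the Table 4 data hard-coded, computes the cost of an arbitrary arc:

# Arc cost c(k, t, r) of the lotsize decision network, formula (II.D.5),
# for the Table 4 data (weeks numbered 1..5).
h = {1: 2, 2: 1}
demand = {1: {1: 60, 2: 100, 3: 140, 4: 200, 5: 120},
          2: {1: 40, 2: 60, 3: 100, 4: 40, 5: 80}}

def c(k, t, r):
    # holding cost of ordering product k in week t to cover demand through week r
    return sum((l - t) * h[k] * demand[k][l] for l in range(t, r + 1))

print(c(1, 2, 4))   # cost of the arc associated with z(1,2,4)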
A production decision requires finding a least cost path through each product's network. Therefore, the new formulation is also called a network flow formulation. It is a well-known result that the LP relaxation of a network flow model provides integer optimal solutions. This will result in optimal LP relaxation values for the variables z_ktr that are naturally integer. As such, the Wagner - Whitin theorem, stating that in case an order is placed it will be for an amount that satisfies an integral number of weeks' demand, is implicitly incorporated. The tight formulation of Eppen and Martin is as follows:

$$\text{(LSS)}\qquad \min \sum_{k=1}^{n} \sum_{t=1}^{m} F_k \, y_{kt} + \sum_{k=1}^{n} \sum_{t=1}^{m} \sum_{r=t}^{m} c_{ktr} \, z_{ktr} \tag{II.D.6}$$

subject to

$$\sum_{r=1}^{m} z_{k1r} = 1 \qquad k = 1, \ldots, n \tag{II.D.7}$$

$$\sum_{r=1}^{t} z_{krt} = \sum_{r=t+1}^{m} z_{k,t+1,r} \qquad k = 1, \ldots, n;\; t = 1, \ldots, m-1 \tag{II.D.8}$$

$$\sum_{r=1}^{m} z_{krm} = 1 \qquad k = 1, \ldots, n \tag{II.D.9}$$

$$\sum_{r=t}^{m} z_{ktr} \leq y_{kt} \qquad k = 1, \ldots, n;\; t = 1, \ldots, m \tag{II.D.10}$$

$$z_{ktr} \geq 0,\; y_{kt} \in \{0, 1\} \qquad k = 1, \ldots, n;\; t = 1, \ldots, m;\; r = t, \ldots, m \tag{II.D.11}$$
In model LSS, the objective function (II.D.6) represents the total ordering and inventory holding cost to be minimized. Constraints (II.D.7) - (II.D.9) ensure that a path is constructed by sending one unit of flow through each product's production decision network. This unit of flow will travel along a least total cost path. In particular, constraints (II.D.7) force the unit of flow to leave the start node. Constraints (II.D.8) indicate that whenever the unit of flow enters node t, it is also required to leave this node; they are also called flow conservation constraints. Constraints (II.D.9) collect the unit of flow back at the last node. Constraints (II.D.10) model the proper relationship between the ordering and the production decision variables, i.e. the distributor can only consider buying units if an order is placed. Finally, constraints (II.D.11) ensure the proper nonnegativity and integrality conditions. Solving the model LSS with the data given in Table 4 gives the following LINDO output:

CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0: 9.61
LP OPTIMUM FOUND AT STEP 30
OBJECTIVE VALUE = 1880.00000
ENUMERATION COMPLETE. BRANCHES= 0 PIVOTS= 30
LAST INTEGER SOLUTION IS THE BEST FOUND
RE-INSTALLING BEST SOLUTION ...
CUMULATIVE CPU TIME IN HR:MIN:SEC = 0: 0:10.98

OBJECTIVE FUNCTION VALUE
1)   1880.0000
VARIABLE   VALUE      REDUCED COST
Y23        1.000000   .000000
Y21        1.000000   200.000000
Y14        1.000000   260.000000
Y13        1.000000   260.000000
Y11        1.000000   260.000000
Z112       1.000000   .000000
Z145       1.000000   .000000
Z212       1.000000   .000000
Z235       1.000000   .000000
Z133       1.000000   .000000
As already mentioned, observe that the model is perfectly tight! Martin (1987) proves that this is always the case. Moreover, the problem took about one and a half seconds to solve on the 486, 66 MHz PC. Again, this makes mathematical programming with formulation LSS a viable technique for solving large real-life single facility multi-product uncapacitated lotsizing problems.
III. SOME ILLUSTRATIONS OF RECENT DEVELOPMENTS IN OPTIMIZATION SOFTWARE

Although the last decade has seen a tremendous surge in algorithmic improvements in optimization software for solving linear programs and ILPs, we would like to focus on the recent emergence of structured modeling languages. This software tool has an enormous potential for boosting the use of mathematical programming in day to day business operations. Many LPs solved in practice contain thousands of constraints and decision variables. Very few users of linear programming would want to input the constraints and objective function each time such an LP is to be solved. For this reason, most actual applications of LP use a matrix generator to simplify the inputting of the LP. A matrix generator allows the user to input the relevant parameters that determine the LP's objective function and constraints; it then generates the LP formulation from that information. For example, let's consider the single facility multi-product lotsizing problem LSS introduced in section II.D. The total number of z_ktr variables is n*m(m+1)/2 and the total number of y_kt variables is n*m; the total number of constraints is 1 + n + 2(n*m). Thus, for a 10 product (n = 10), 50 week (m = 50) problem, the problem would involve 13,250 variables and 1,011 constraints, clearly too many for convenient input. A matrix generator for this problem would require the user to input only the following information for each product: ordering cost and inventory holding cost, and for each product - week combination: demand. From this information, the matrix generator would generate the LP's objective function and constraints (model LSS), call up an LP solver such as LINDO and solve the problem. Finally, an output analyzer would be written to display the output in a user-friendly format.
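These size figures are easy to tabulate for other problem dimensions. A small Python sketch, using the counts quoted above, reproduces the 13,250 variables and 1,011 constraints:

# Size of formulation LSS as a function of n products and m weeks,
# following the counts quoted in the text.
def lss_size(n, m):
    z_vars = n * m * (m + 1) // 2
    y_vars = n * m
    constraints = 1 + n + 2 * n * m
    return z_vars + y_vars, constraints

print(lss_size(2, 5))     # the Table 4 example
print(lss_size(10, 50))   # -> (13250, 1011)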
A. The LINGO Package

LINGO (Cunningham and Schrage (1992)) is an example of a sophisticated matrix generator; it is even much more! LINGO is a structured optimization modeling language that enables the user to create many (even thousands of) constraints and objective function terms by typing just one single line. To illustrate how LINGO works, we will solve the LSS problem for the data set given in Table 4. The LINGO model for the example problem is as follows:

MODEL:
1]! Strong Formulation for the Single Facility Multi-Product
2] Uncapacitated Lotsizing Problem;
3]
4]SETS:
5] item / 1..2 /:
6] F, ! fixed ordering cost for item &1;
7] h; ! inventory holding cost for item &1;
8] week / 1..5 /;
9] ixw( item, week):
10] d, ! demand for item &1 in each week &2;
11] y; ! = 1, if we order the item &1 in the week &2, 0, otherwise;
12] ixwxw( ixw, week) | &3 #ge# &2:
13] z, ! = fraction of demand of item &1 from week &2 till week &3 satisfied from an order placed in week &2;
14]
15] c; ! = inventory holding cost associated with variable z;
16]ENDSETS
17]
18]DATA:
19] F = 260, 200;
20] h = 2, 1;
21] d = 60, 100, 140, 200, 120, 40, 60, 100, 40, 80;
22]
23]ENDDATA
24]
25]! compute the inventory holding cost;
26]@for( ixwxw( k, t, r) :
27] c( k, t, r) = @sum( week( l) | l #ge# t #and# l #le# r : ( l - t) * h( k) * d( k, l)) ;);
28]
29]
30]! objective: min total ordering and inventory holding costs;
31]min = @sum( ixw( k, t) : F( k) * y( k, t)) + @sum( ixwxw( k, t, r) : c( k, t, r) * z( k, t, r)) ;
32]
33]
34]! send one unit of flow through each product's network;
35]@for( item( k) :
36] 1 = @sum( week( s) : z( k, 1, s)) ;);
37]
38]! conservation of flow;
39]@for( ixw( k, t) | t #lt# 5 :
40] @sum( week( r) | r #le# t : z( k, r, t)) =
41] @sum( week( s) | s #ge# t + 1 : z( k, t + 1, s)) ;);
42]
43]! recuperate the flow at the end node;
44]@for( item( k) :
45] @sum( week( r) : z( k, r, 5)) = 1 ;);
46]
47]! relation ordering and order quantity;
48]@for( ixw( k, t) :
49] @sum( ixwxw( k, t, r) : z( k, t, r)) < y( k, t) ;);
50]
51]! define the ordering variables 0/1;
52]@for( ixw : @gin( y) ; @bnd( 0, y, 1) ;);
53]END

The description of the problem modeled is given in lines 1 and 2. Everything between an exclamation point (!) and a semicolon (;) is comment and therefore not interpreted by LINGO. To begin setting up a LINGO model, the SETS section defines the problem by describing what it is all about. There are two primitive sets: item (line 5) and week (line 8). Two attributes are defined over the set item, namely F and h, respectively the fixed ordering cost and the weekly per unit inventory holding cost of each item. As LINGO models will often be read by many different people concerned with the problem, it is well advised to indicate the attribute definitions in the program. There are also two derived sets: ixw (line 9) and ixwxw (line 12). The derived set ixw is a cross-product of the primitive sets item and week and has two attributes d and y defined over it. The second derived set ixwxw is a cross-product of the derived set ixw with the primitive set week. In
addition, there is the condition that the third index of this set should be equal to or larger than the second index of this set, in accordance with the definition of z_ktr, where we had that r = t, ..., m. ENDSETS ends the specification of the sets needed to define the problem. In the DATA section of the LINGO program, the user can supply all the necessary data for the problem. Verify that we have correctly input all the data of Table 4. In light of the current tendency to separate the model from its data, LINGO also permits the user to read the data from an external file. This makes it possible to use the same program for different data sets. ENDDATA closes the data section. In lines 25 to 28 of the program, we compute the inventory holding cost c_ktr associated with the order quantity decision variables z_ktr. The reader should recognize formula (II.D.5). It is customary to give a short explanation of the meaning of the instructions that follow (line 25). The LINGO functions are prefixed by the @ sign. The instructions (lines 26 - 28) read pretty much like a novel! A possible description goes along the following lines: for all products k and time periods t and r, elements of the set ixwxw, compute c_ktr as the sum over all weeks l such that l is equal to or greater than t and l is less than or equal to r, of (l - t) * h_k * d_kl. Lines 30 to 52 then give formulation LSS; observe the correspondence between the mathematical notation (II.D.6) - (II.D.11) and the LINGO model. Each set of instructions is accompanied by an explanatory comment line. Again, it is possible to give a description in words of the symbolic LINGO instructions. For example, lines 44 - 45 could be read as follows: for every item k, generate the constraint, sum over all weeks r, z_kr5, and put this sum equal to one. Observe in line 52 that we have omitted the indices of the set ixw because the instructions are required for all the indices of the set. Remark also that a semicolon ends each LINGO instruction and that brackets have to be closed appropriately! In addition to solving the problem, LINGO allows the user to generate the ILP model based on the few instructions provided. The ILP generated with LINGO for the given data is as follows:
SUBJECT TO
32) - Z111 - Z112 - Z113 - Z114 - Z115 = - 1
33) - Z211 - Z212 - Z213 - Z214 - Z215 = - 1
34)   Z111 - Z122 - Z123 - Z124 - Z125 = 0
35)   Z112 + Z122 - Z133 - Z134 - Z135 = 0
36)   Z113 + Z123 + Z133 - Z144 - Z145 = 0
37)   Z114 + Z124 + Z134 + Z144 - Z155 = 0
38)   Z211 - Z222 - Z223 - Z224 - Z225 = 0
39)   Z212 + Z222 - Z233 - Z234 - Z235 = 0
40)   Z213 + Z223 + Z233 - Z244 - Z245 = 0
41)   Z214 + Z224 + Z234 + Z244 - Z255 = 0
42)   Z115 + Z125 + Z135 + Z145 + Z155 = 1
43)   Z215 + Z225 + Z235 + Z245 + Z255 = 1
44) - Y11 + Z111 + Z112 + Z113 + Z114 + Z115 <= 0
45) - Y12 + Z122 + Z123 + Z124 + Z125 <= 0
46) - Y13 + Z133 + Z134 + Z135 <= 0
47) - Y14 + Z144 + Z145 <= 0
48) - Y15 + Z155 <= 0
49) - Y21 + Z211 + Z212 + Z213 + Z214 + Z215 <= 0
50) - Y22 + Z222 + Z223 + Z224 + Z225 <= 0
51) - Y23 + Z233 + Z234 + Z235 <= 0
52) - Y24 + Z244 + Z245 <= 0
53) - Y25 + Z255 <= 0
END
INTE Y25
INTE Y24
INTE Y23
INTE Y22
INTE Y21
INTE Y15
INTE Y14
INTE Y13
INTE Y12
INTE Y11
The solution to the problem above is provided in section II.D. Once the LINGO model has been specified, it becomes easy to generate large realistically-sized problems. It suffices to change the dimensions of the primitive sets appropriately and provide the necessary data.

IV. CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH

The developments in mathematical programming during the last decade underlie the recent explosion of its use for solving day to day managerial decision problems. In companies around the world, and more specifically in the US, we have experienced the emergence of islands of optimization. The use of cutting planes and the implicit modeling of theorems regarding the structure of an optimal solution through e.g. variable redefinition have allowed us to solve problems that were previously believed to be unsolvable. The introduction of structured modeling languages such as LINGO has allowed us to generate models quickly. The greatest contribution results from the joint use of better problem formulations and structured modeling languages. Using LINGO it becomes easy to generate various different formulations for the same problem, thereby allowing both the researcher and the practitioner to quickly evaluate the quality of the different ILP formulations with respect to the resulting gap. Moreover, tight cutting planes and the use of variable redefinition are ideas that are highly problem specific. Each different problem has a different particular set of tight cuts, and for each problem there exists a "best" variable definition. Consequently, as the applications of mathematical programming continue to explode into the future, academics and practitioners alike will continuously be facing an abundant set of research challenges and opportunities.

REFERENCES

Cunningham, K. and L. Schrage, 1992, LINGO: An Optimization Modeling Language (LINDO Systems Inc., Chicago).
Degraeve, Z., 1990, Strong Formulations for the Job-Shop Scheduling Problem, Working Paper Series, (The University of Chicago, Graduate School of Business).
Degraeve, Z., 1992, Scheduling Joint Product Operations with Proposal Generation Methods, PhD dissertation, (The University of Chicago, Graduate School of Business).
Eppen, G.D., Martin, R.K. and L. Schrage, 1989, A Scenario Approach to Capacity Planning, Operations Research 37, 4, 517-527.
Eppen, G.D. and R.K. Martin, 1987, Solving Multi-Item Capacitated Lotsizing Problems Using Variable Redefinition, Operations Research 35, 6, 832-848.
Herroelen, W., 1987, Production Scheduling and Sequencing, Class notes, (MBA program, K.U.Leuven).
Martin, R.K., 1987, Generating Alternative Mixed-Integer Programming Models Using Variable Redefinition, Operations Research 35, 6, 820-831.
Nemhauser, G.L. and L.A. Wolsey, 1988, Integer and Combinatorial Optimization (John Wiley & Sons Inc., New York).
Schrage, L., 1993, LINDO: An Optimization Modeling System (LINDO Systems Inc., Chicago).
Wagner, H.M. and T.M. Whitin, 1958, Dynamic Version of the Economic Lotsize Model, Management Science 5, 89-96.
Tijdschrift voor Economie en Management
Vol. XXXIX, 4, 1994
The Traveling Salesman Problem Applied to Order Picking
by L. GELDERS* and D. HEEREMANS*
This article deals with an application of the traveling salesman problem, better known as the "Traveling Salesman Problem" (TSP). This problem involves determining the shortest route through a number of given points. These points may, for example, be cities that have to be visited by a traveling salesman, hence the name of the problem. The TSP is an exceptionally well-known and thoroughly studied problem in operations research. The problem, which at first sight looks relatively simple, turns out to be an exceptionally hard combinatorial problem, especially if one wants to guarantee an optimal solution for a problem of realistic dimensions. A problem with N cities turns out to have (N-1)! potential solutions. Although TSP-like problems were already studied by Euler, Hamilton and other famous mathematicians, the name TSP only emerged around 1931-32. In 1948 the prestigious Rand Corporation started studying the problem, and it attracted the attention of the first pioneers of linear programming. One of the most important papers on the TSP is by Dantzig (1954), while the branch-and-bound approach was developed by Little (1963). In combinatorial optimization the NP-hard TSP continues to attract the attention of the research community, and various formulations and solution techniques have been developed (see Lawler (1985) and Laporte (1992)).

* Centrum Industrieel Beleid, K.U.Leuven.
The TSP turns out to be relevant not only for the traveling salesman problem formulated above. Numerous other combinatorial problems turn out to have the same structure as the TSP. Examples are the reduction of machine setup times, the routing of vehicles, the determination of production sequences, etc. In practice one is usually satisfied with a good solution, even if it is not optimal. Hence the emergence of numerous heuristic methods (heuristics) that quickly generate a reasonable solution. In practical applications, however, the performance of these heuristics is often overlooked, or at least insufficiently tested. This was also the case with the heuristic in the case study that we describe below. This case study deals with an internal routing problem. It concerns the routing of workers, also called order pickers, through a distribution warehouse. In this warehouse, the goods ordered by the customers are picked by the workers. The goods are stocked there in boxes, by type. The workers thus have to move through the warehouse to take the required goods out of the boxes. Their working time can therefore be divided into walking time and picking time. The purpose of the study is to determine a method that reduces the walking time to a minimum. If we assume a constant walking speed, this is the method that results in the shortest distance traveled by the workers.
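The (N-1)! growth mentioned above is what rules out brute force for instances of realistic size. The following Python sketch makes the enumeration explicit for a tour with a fixed start and end point, as in the order picking setting; the 4-point distance matrix is hypothetical example data, and the approach is workable only for very small N:

import itertools

# Exhaustive search for the shortest tour starting and ending at node 0
# (the conveyor). There are (N-1)! candidate tours to evaluate.
D = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]

def tour_length(tour):
    return sum(D[a][b] for a, b in zip((0,) + tour, tour + (0,)))

best = min(itertools.permutations(range(1, len(D))), key=tour_length)
print(best, tour_length(best))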
As already mentioned above, the case study deals with an order picking problem in a warehouse. In this warehouse, the required goods are picked per customer order and placed in boxes. This task is performed by 30 pickers. For this purpose they have an equal number of carts which they pull along and on which the boxes are placed. These carts are equipped with computers that indicate the route to be followed. The warehouse itself is shown in Figure 1. It covers an area of about 2700 m2. The various goods are stacked in boxes by type, from which the picker has to take the requested quantity. These goods are located in aisles. The warehouse has 4 aisles with a length of 65 m and 9 cross aisles with a length of 40 m.
FIGURE 1
Layout of the warehouse
(each block represents 25 pick locations: 5 horizontal x 5 vertical)
First, the handling of an order by a worker in the warehouse will be described. When the workers start with a new order, they take an empty box from the conveyor belt (Figure 1). The barcode present on the box is then scanned. This allows the computer (on the cart) to determine at which locations in the warehouse goods have to be collected. The worker then starts filling the box. The order in which the various items that must go into the box are taken from the racks determines the route the worker has to follow. This order is determined by the computer. It shows the worker, each time, the next location to which he should proceed. The way in which the computer determines this order is the core of our study and will be discussed later. When all items have been picked, the box has to be placed back on the conveyor. The route that the worker travels per customer order thus always starts and ends at the conveyor. The heuristic algorithm that the computer used at the time of the study assumed an imposed walking direction for each aisle. This implies that whenever an item had to be collected in an aisle, the entire aisle had to be traversed; turning back was not possible. Moreover, the cross aisles were traversed in increasing order. An example of a resulting tour is shown in Figure 2.

FIGURE 2
Example of a tour with the heuristic (143.3 m)
The problem can be formulated as a traveling salesman problem, since we want to determine an optimal tour through a number of points, albeit with an extra restriction, namely that the tour starts and ends at the conveyor. By solving this TSP, the performance of the heuristic used up to then can be examined.
III. PROBLEM APPROACH

In order to solve the traveling salesman problem, we obviously need to know the distances between all possible points. Since we are dealing here with more than 5800 locations, this would result in a very large distance matrix. It was therefore decided to simplify the problem. First, all items located above one another were considered as one item; there is, after all, no difference in walking time, and thus in distance traveled, when these items have to be picked. Furthermore, all goods located in the same aisle are assumed to be picked in the middle of that aisle. Since one-way traffic is imposed, the entire aisle has to be traversed anyway whenever something has to be collected in that aisle. The error made by this simplification is therefore only due to the extra distance traveled when something has to be picked up on both sides of the aisle (zigzagging). These simplifications give rise to a simpler layout (Figure 3). The problem is thus reduced to a TSP with 22 locations, which is considerably easier to solve. In order to compare the programmed heuristic existing in the firm with the optimal solution, two versions of the TSP problem were solved. In the first version the restriction concerning one-way traffic was retained; in the second it was no longer taken into account. This amounts to solving the same problem, but with different distance matrices. The simplifications introduced do entail larger errors, but these average out over the number of orders. The heuristic and the two versions of the TSP algorithm were applied to three test sets. These test sets comprise 22, 35 and 36 orders respectively. The goods appearing on these orders were first converted into points on the simplified layout.

FIGURE 3
Simplified layout of the warehouse
Subsequently, the TSP algorithm and the heuristic were applied to find the shortest route through these points, passing via the conveyor. The TSP was solved using the algorithm of Little, which is described extensively in Smith (1982). This is an exact solution method for the TSP. The algorithm was implemented in Pascal. The computations were carried out on a 386 DX PC and took a few seconds per trip. The results obtained showed that the distance traveled could be reduced by about 9% on average by using the TSP algorithm. When the one-way traffic restriction was lifted, an improvement of as much as 40% could even be observed. For some orders the distance traveled even decreased by 47% and 53% respectively. An example of such a route is shown in Figure 4.
FIGURE 4
Two example routes: total distance traveled 172.6 meters and 114 meters respectively
In 63% of the orders examined, however, the existing heuristic gave the optimal solution. These were mostly the very simple orders where goods had to be picked at only a few locations. When we filter out these orders, the TSP algorithm performs on average 20% better (45% without one-way traffic) than the existing heuristic.
Based on the results of the study, the consequences for the efficiency of the operation can be examined. A reduction of the distance to be traveled by 9% implies a decrease of the walking time per picker by 9%. This walking time was estimated at 40% of the total working time, which corresponds to 8 h/day x 60 min/h x 0.40 = 192 min/day. A reduction to 192/1.09 = 176 min/day is possible, which amounts to a time gain of 16 min per man-day. Since 30 pickers are active, this gives a gain of 30 x 16 min = 480 min = 8 h. One picker less will thus be needed to carry out the same activities, which amounts to a saving of 1 million BF per year. The lifting of the one-way traffic restriction turns out to be very important. A reduction of the walking time by 42% can be achieved in this way. This amounts to 30 x 8 h/day x 0.40 x 0.42 = 40 h/day. The number of pickers could thus be reduced by 5 through this change. This means a cost reduction of about 5 million BF per year. The implementation of the traveling salesman algorithm can thus lead to a substantial improvement of the warehouse operation compared with the existing heuristic procedure.
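The savings arithmetic above can be verified in a few lines. The following Python sketch, with the figures from the study hard-coded, reproduces the gain of about 16 minutes per man-day and roughly 8 man-hours per day:

# Check of the savings figures: 9% less walking per picker.
pickers, hours_per_day, walk_fraction = 30, 8, 0.40
walk_min = hours_per_day * 60 * walk_fraction    # 192 min/day per picker
gain = walk_min - walk_min / 1.09                # about 16 min per man-day
print(round(gain), round(gain * pickers / 60))   # -> 16 min and about 8 h/day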
REFERENCES

Dantzig, G.B., Fulkerson, D.R. and S.M. Johnson, 1954, Solution of a Large-Scale Traveling-Salesman Problem, Operations Research 2, 393-410.
Laporte, G., 1992, The Traveling Salesman Problem: An Overview of Exact and Approximate Algorithms, European Journal of Operational Research 59, 231-247.
Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G. and D.B. Shmoys, 1985, The Traveling Salesman Problem (J. Wiley, Chichester).
Little, J.D.C., Murty, K.G., Sweeney, D.W. and C. Karel, 1963, An Algorithm for the Traveling Salesman Problem, Operations Research 11, 972-989.
Smith, D.K., 1982, Network Optimisation Practice - A Computational Guide (Ellis Horwood Ltd., Chichester).
Van Winckel, F., 1990, Lineaire programmatie en aanverwante methoden (Acco, Leuven).
Tijdschrift voor Economie en Management
Vol. XXXIX, 4, 1994

Resource-Constrained Project Scheduling: A View on Recent Developments
by W.S. HERROELEN* and E.L. DEMEULEMEESTER*
Scheduling and sequencing is concerned with the optimum allocation of scarce resources over time. Scheduling deals with defining which activities are to be performed at a particular time. Sequencing concerns the ordering in which the activities have to be performed. The allocation of scarce resources over time has been the subject of extensive research since the early days of operations research in the mid 1950s. The result is a vast and not easily digested literature and a considerable gap between scheduling theory and shop floor practice. Practitioners blame scheduling theoreticians for spending scarce research money on studying toy problems such as sequencing a set of simultaneously available unordered jobs with known durations on a never failing machine in order to optimize irrelevant objective functions. Theoreticians blame practitioners for their ignorance about the recent developments, their reluctance in applying useful theory, or their over-enthusiasm in applying scheduling procedures miles away from their natural field of application. Despite this mutual 'interest', the major issues largely remain unresolved in practice, and scheduling and sequencing problems remain the subject of intensive research. All this does not come as a surprise. Scheduling and sequencing theory, more than any other field in the area of operations management and operations research, is characterized by a virtually unlimited num-

* Department of Applied Economics, K.U.Leuven. This research was supported by N.F.W.O.-F.K.F.O. Project No. 2.0051.94, which is gratefully acknowledged.
ber of problem types. The terminology arose in the processing and manufacturing industries and most research has traditionally been focused on deterministic machine scheduling (see the books by Baker (1974), Bellmann et al. (1982), Conway et al. (1967), French (1982), Herroelen (1991), Morton and Pentico (1993), and Rinnooy Kan (1974)). In this context the type of resource is a machine that can perform at most one activity at a time. The activities are commonly referred to as jobs, and it is usually assumed that a job is processed by at most one machine at a time. The processing of a job on a machine is called an operation. The machine environment is quite diverse. In a single machine environment, each job has only one operation (one-phase production). In a parallel machine environment each job also requires just one operation, and that operation may be performed on any of the machines. There are three classes of problems depending on whether the machines are identical, uniform, or unrelated. When the machines are identical, the processing time of a job is the same on all machines. When the machines are uniform, the processing time varies as a function of a given reference speed. When the machines are unrelated, the processing time of a job again varies, but now in a completely arbitrary fashion. In multistage production, a job consists of a number of operations. Technological precedence constraints demand that each job should be processed through the machines in a particular order. For general job-shop problems there are no restrictions upon the form of the technological constraints. When all the jobs share the same processing order we have a flow-shop problem. In the special case of an open shop, each job has to be processed on each machine, but there is no particular order to follow. In open shops the schedule determines not only the order in which machines process the jobs, but also that in which the jobs pass between machines. Jobs are characterized by a ready time (release date) which denotes the time at which the job becomes available for processing. The time by which the job should be finished is called the due date. It is possible to consider situations where jobs may be split or not. Each operation takes a certain length of time, the processing time, to be performed. In addition, operations may suffer from sequence dependent setup times. The performance criteria are numerous: minimize schedule length (makespan); minimize mean (weighted) flow time; minimize mean or maximum lateness or tardiness (lateness is the difference between a job's completion time and its due date - the lateness for an early job
being negative; when a job is completed after its due date, it is tardy - tardiness being the maximum of zero and the lateness); minimize the number of tardy jobs; maximize throughput (number of jobs completed per time unit), etc. Sometimes combined scheduling criteria are used: minimize mean flow time subject to no jobs late, search for the shortest mean flow time schedule, search for a schedule in which no job is early nor tardy (just-in-time), etc. Problems can be studied in a static environment (all jobs simultaneously available) or in a dynamic environment (jobs have unequal ready times). Problems may be considered to be deterministic or stochastic. Over the years, several (unrealistic) assumptions of the basic machine scheduling problems have been relaxed. A natural extension involves the presence of additional resources, where each resource has a limited size and each job requires the use of a part of each resource during its execution. This leads us to the area of resource-constrained project scheduling, which again covers a tremendous variety of problem types. Certain types of resources are depleted by use (e.g. nonrenewable resources such as money and energy). Resources may be available in an amount that varies over time in a predictable manner (e.g. seasonal labor) or in an unpredictable manner (e.g. equipment vulnerable to failure). Resources may be shared among several jobs, and a job may need several resources. The resource amounts required by a job may vary during its processing, and the processing time itself could depend on the amount or type of resource allocated, as in the case of the above mentioned uniform or unrelated parallel machines. Over the past few years, extensive research efforts have resulted in new results on the level of problem classification and complexity, and new optimal and suboptimal solution approaches. This article aims at providing a guided tour through what we believe to be the most important recent developments in the area of resource-constrained project scheduling. We shall take a problem-oriented view, in the sense of addressing the issues that are of concern to both project management theory and practice. In making this tour, we shall explore the opportunity to highlight the results obtained over the past years by the Operations Management Group of the Department of Applied Economics of K.U.Leuven. The organization of this paper is as follows. Section II focuses on an issue of extreme practical importance: the power and pitfalls of project management software. Section III briefly comments on the impact of complexity theory (NP-completeness theory, which aims at
classifying problems as 'easy' or 'hard') on project scheduling research. Section IV then concentrates on the recent development of optimal procedures for four types of resource-constrained project scheduling problems: the classical resource-constrained project scheduling problem (RCPSP), the generalized resource-constrained project scheduling problem (GRCPSP), the preemptive resource-constrained project scheduling problem (PRCPSP), and the problem of minimizing resource availability costs. Section V focuses on the discrete time/cost trade-off problem (DTCTP) and recent advances with optimal solution procedures. Section VI reports on the issue of selecting performance criteria and reports on recent developments in the area of maximizing the net present value of project networks. The complexity issue is resumed in Section VII, but now from a somewhat different perspective. Managers and theoreticians have a definite sensation about the difficulty in solving various instances of the scheduling problems discussed in the Sections IV-VI, which brings us to the recent advances in measuring the complexity of a project network through the use of the so-called complexity index. Section VIII is reserved for our overall conclusions.

II. PROJECT MANAGEMENT SOFTWARE

A wide variety of project planning and control software packages for personal computers and workstations flood the market. Popular magazines and scientific journals regularly publish reviews (Assad and Wasil (1986), De Wit and Herroelen (1990), Edwards et al. (1984), Fresko-Weiss (1989), Hogan et al. (1985), Maroto and Tormos (1994)). Any software for project planning and control must satisfy a number of minimal requirements imposed by the user. De Wit and Herroelen (1990) give a comprehensive and definitive exposition of these demands, and a detailed discussion of the criteria against which project planning software should be judged. Their conclusions are illuminating. While many packages do offer resource planning features allowing the planner to obtain reports on the resource usage in specific time intervals, resource profile charts and cumulative resource usage charts, the resource monitoring capabilities are not only very primitive but often dangerously misleading. The authors do not recommend the use of the packages' suboptimizing resource monitoring procedures to the project planner, except in those rare situations where the technical know-how and expertise are available to evalua-
evaluate the precise algorithmic steps as well as their impact on the project schedule. Although the resource monitoring performance of commercial software seems to have improved somewhat in more recent comparative tests (Maroto and Tormos (1994)), the results are neither impressive nor convincing.
III. COMPLEXITY THEORY

For some scheduling problems, algorithms have been developed that can solve instances with thousands of jobs. A typical example is the shortest processing time rule which minimizes mean flow time in a deterministic static single-machine scheduling problem. For other problems, the best algorithms we have can only cope with a few jobs. Complexity theory provides the mathematical framework which allows us to classify scheduling problems as 'easy' or 'hard' (for a review see Shmoys and Tardos (1993)). A computational problem can be viewed as a function f that maps each input x in some given domain to an output f(x) in some given range. Complexity theory is interested in studying the time required to compute f(x) as a function of the length of the encoding of the input x, denoted |x|. The efficiency of an algorithm that computes f(x) on input x is measured by an upper bound U(n) on the number of steps that the algorithm needs on any input x with n = |x|; here U(n) = O(g(n)) if there exist constants c and n₀ such that U(n) ≤ c·g(n) for all n ≥ n₀. A problem is considered 'easy' if there exists an algorithm for its solution whose running time U(n) is bounded by a polynomial function of n; i.e., U(n) = O(n^k) for some constant k. For any minimization problem f, there is an associated decision problem, the output of which answers the question 'Is f(x) ≤ h?' for any given h. Let P denote the class of decision problems that can be solved in polynomial time. When a scheduling problem is formulated as a decision problem (e.g. 'Is there a feasible schedule that meets a due date?'), the 'yes' answer can be certified by a very small amount of information: the schedule that meets the deadline. Given this certificate, the 'yes' answer can be verified in polynomial time. Let NP denote the class of decision problems where each 'yes' input x has a certificate y, such that |y| is bounded by a polynomial in |x| and there is a polynomial-time algorithm to verify that y is a valid certificate for x. It is generally conjectured that P does not equal NP. An NP-complete problem is a hardest problem in NP, in that if it were solvable in polynomial time, then each problem in NP would
be solvable in polynomial time, so that P would equal NP. As such, NP-completeness of a particular problem is strong evidence that a polynomial-time algorithm for its solution is unlikely to exist. An optimization problem will be called NP-hard if the associated decision problem is NP-complete. Complexity results for resource-constrained project scheduling problems are not encouraging: virtually all except the simplest problems are NP-hard (Blazewicz et al. (1983), Lawler et al. (1993)). The realization that virtually all resource-constrained project scheduling problems are intractable had an immediate impact: the evident use of optimal procedures of the branch-and-bound type and the recent interest in heuristic search techniques for generating suboptimal solutions. The former are reviewed in the next section. The latter are just emerging and will receive some attention in our conclusions and directions for future research (Section VIII).
IV. SCHEDULING PROJECTS UNDER RESOURCE CONSTRAINTS

Resource management problems in project networks appear in a wide variety (Herroelen (1972), Herroelen and Demeulemeester (1992)). One of the best known problems is the problem of scheduling project networks subject to resource constraints. The classical resource-constrained project scheduling problem (RCPSP) involves the scheduling of a project to minimize its total duration subject to zero-lag finish-start precedence constraints of the PERT/CPM type and constant availability constraints on the required set of renewable resources. The RCPSP can be conceptually formulated as follows (Demeulemeester and Herroelen (1992a)):

    minimize f_n                                                    [1]

subject to

    f_1 = 0                                                        [2a]
    f_i ≤ f_j − d_j          for all (i,j) ∈ H                     [2b]
    Σ_{i ∈ S_t} r_ik ≤ a_k   t = 1,2,...,f_n;  k = 1,2,...,K        [3]
where the problem parameters are the following:
H = set of pairs of activities indicating precedence constraints
d_i = processing time of activity i, i = 1,2,...,n
r_ik = amount of resource type k required by activity i, i = 1,2,...,n; k = 1,2,...,K
a_k = total availability of resource type k, k = 1,2,...,K

and the problem output consists of the following:

f_i = finish time of activity i, i = 1,2,...,n
S_t = set of activities in process in time interval ]t−1,t] = {i | f_i − d_i < t ≤ f_i}

It is assumed that activity i has a fixed processing time d_i (set-up times are negligible or are included in the processing time). We further assume activity-on-the-node networks where activities 1 and n are dummy activities indicating the single start and end node of the project, respectively. The resource requirements r_ik are known constants over the processing interval of the activity. The availability of resource type k, a_k, is also a known constant during the project duration interval. Eq. [2a] assigns a completion time of 0 to the dummy start activity 1. The precedence constraints given by Eq. [2b] indicate that an activity j can only be started if all predecessor activities i are completed. Once started, activities run to completion (non-preemption condition). The resource constraints given in Eq. [3] indicate that for each time period ]t−1,t] and for each resource type k, the resource amounts required by the activities in progress cannot exceed the resource availability. The objective function is given as Eq. [1]: the project duration is minimized by minimizing the finish time of the unique dummy end activity n. The RCPSP, which is NP-hard (Blazewicz et al. (1983)), has been extensively studied in the literature. Previous research on optimal procedures basically involved the use of integer programming procedures and implicit enumeration (dynamic programming and branch-and-bound). For a comprehensive review, we refer the reader to Herroelen and Demeulemeester (1992). The depth-first solution procedure of Demeulemeester and Herroelen (1992a) seems to be the fastest exact solution method for solving the RCPSP.
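As an illustration of the model [1]-[3], the following minimal sketch checks a candidate vector of finish times against the precedence and resource constraints. The data layout (plain dictionaries, with H as a set of precedence pairs) is our own choice for exposition and is not taken from the cited papers.

    def is_feasible(f, d, r, a, H):
        """Check finish times f[i] against the conceptual RCPSP model [1]-[3].

        f, d : dict mapping activity -> finish time / processing time
        r    : dict mapping activity -> {resource type k: requirement r_ik}
        a    : dict mapping resource type k -> constant availability a_k
        H    : iterable of pairs (i, j) meaning i must finish before j starts
        """
        # Precedence constraints [2b]: j starts at f[j] - d[j], which may be
        # no earlier than the finish of each predecessor i.
        if any(f[j] - d[j] < f[i] for (i, j) in H):
            return False
        # Resource constraints [3]: in every unit interval ]t-1, t] the demand
        # of the set S_t of activities in process may not exceed any a_k.
        for t in range(1, max(f.values()) + 1):
            S_t = [i for i in f if f[i] - d[i] < t <= f[i]]
            if any(sum(r[i].get(k, 0) for i in S_t) > a[k] for k in a):
                return False
        return True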
A. The Demeulemeester and Herroelen (DH) procedure

The DH-procedure (Demeulemeester (1992), Demeulemeester and Herroelen (1992a)) generates a search tree, the nodes of which correspond to partial schedules in which finish times have temporarily been assigned to a subset of the activities of the project. The partial schedules are feasible, satisfying both the precedence and resource constraints. Partial schedules PS_m are only considered at those time instants m which correspond to the completion time of one or more project activities. The partial schedules are constructed by semi-active timetabling. In other words, each activity is started as soon as it can within the precedence and resource constraints. A partial schedule PS_m at time m thus consists of the set of temporarily scheduled activities. Scheduling decisions are temporary in the sense that temporarily scheduled activities may be delayed as a result of decisions made at later stages in the search process. Partial schedules are built up starting at time 0 and proceed systematically throughout the search process by adding at each decision point subsets of activities, including the empty set, until a complete feasible schedule is obtained. In this sense, a complete schedule is a continuation of a partial schedule. At every time instant m we define the eligible set E_m as the set of activities which are not in the partial schedule and whose predecessor activities have finished. These eligible activities can start at time m if the resource constraints are not violated. Demeulemeester and Herroelen (1992a) have proven two theorems which allow the procedure, at decision point m, to decide which eligible activities must be scheduled by themselves, and which pairs of eligible activities must be scheduled concurrently.
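A minimal sketch of the eligible-set computation (our own illustration; the actual DH implementation obviously differs):

    from collections import defaultdict

    def eligible_set(PS, finish, H, m):
        """E_m: activities outside the partial schedule PS whose predecessors
        have all been (temporarily) scheduled with finish times <= m."""
        preds, acts = defaultdict(set), set()
        for i, j in H:
            preds[j].add(i)
            acts.update((i, j))
        return {j for j in acts - PS
                if all(i in PS and finish[i] <= m for i in preds[j])}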
Theorem 1. If at time m the partial schedule PS_m has no activity in progress and an eligible activity i cannot be scheduled together with any other unscheduled activity at any time instant m' ≥ m without violating the precedence and resource constraints, then there exists an optimal continuation of the partial schedule with the eligible activity i put in progress (started) at time m.

Theorem 2. If at time m the partial schedule PS_m has no activity in progress, if there
is an eligible activity i which can be scheduled concurrently with only one other unscheduled activity j at any time instant m' ≥ m without violating precedence or resource constraints, and if activity j is both eligible and not longer in duration than activity i, then there exists an optimal continuation of the partial schedule in which both activities i and j are put in progress at time m.

If it is impossible to schedule all eligible activities at time m, a resource conflict occurs which will produce a new branching in the branch-and-bound tree. The branches describe ways to resolve the resource conflict by deciding which combinations of activities are to be delayed. A delaying set D(p) consists of all subsets of activities D_q, either in progress or eligible, the delay of which would resolve the current resource conflict at level p of the search tree. A delaying alternative D_q is minimal if it does not contain other delaying alternatives D_r ∈ D(p) as a subset. Demeulemeester and Herroelen (1992a) give the proof that in order to resolve a resource conflict, it is sufficient to consider only minimal delaying alternatives. One of the minimal delaying alternatives (nodes in the search tree) is arbitrarily chosen for branching. The delay of a delaying alternative D_q is accomplished by adding a temporal constraint causing the corresponding activities to be delayed up to the delaying point, which is defined as the earliest completion of an activity in the set of activities in progress that does not belong to the delaying alternative. The delayed activities are removed from the partial schedule and from the set of activities in progress, and the algorithm continues by computing a new decision point. The search process continues until the dummy end activity has been scheduled. Every time such a complete schedule has been found, backtracking occurs: a new delaying alternative is arbitrarily chosen from the set of delaying alternatives D(p) at the highest level p of the search tree that still has some unexplored delaying alternatives left, and branching continues from that node. When level zero is reached in the search tree, the search process is completed. Two dominance rules are used to prune the search tree. The first one is a variation of the well-known left-shift dominance rule, and can be stated as follows:
Theorem 3. If the delay of the delaying alternative at the previous level of the branch-and-bound tree forced an activity i to become eligible at time m, if the current decision is to start activity i at time m, and if activity i can be left-shifted without violating the precedence or resource constraints (because activities in progress were delayed), then the corresponding partial schedule is dominated.
The second dominance rule is based on the concept of a cutset. At every time instant m a cutset C_m is defined as the set of unscheduled activities for which all predecessor activities belong to the partial schedule PS_m. The proof of the following theorem can be found in Demeulemeester (1992) and Demeulemeester and Herroelen (1992a):

Theorem 4. Consider a cutset C_m at time m which contains the same activities as a cutset C_k which was previously saved during the search of another path in the search tree. If time k was not greater than time m and if all activities in progress at time k did not finish later than the maximum of m and the finish time of the corresponding activities in PS_m, then the current partial schedule PS_m is dominated.
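Before turning to the lower bounds, the branching step described above can be made concrete with a small sketch that enumerates minimal delaying alternatives by brute force (sufficient for exposition; the published procedure uses a more refined enumeration):

    from itertools import combinations

    def minimal_delaying_alternatives(S, r, a):
        """All minimal subsets of S (activities in progress or eligible) whose
        delay resolves the resource conflict at the current decision point."""
        def fits(keep):
            return all(sum(r[i].get(k, 0) for i in keep) <= a[k] for k in a)

        minimal = []
        for size in range(1, len(S) + 1):
            for D in map(set, combinations(sorted(S), size)):
                # D resolves the conflict if the remaining activities fit, and
                # it is minimal if no smaller alternative is contained in it.
                if fits(S - D) and not any(M <= D for M in minimal):
                    minimal.append(D)
        return minimal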
The procedure has been tested with three lower bounding rules. The well-known remaining critical path length bound and critical sequence lower bound (Stinson et al. (1978)) are supplemented by an extended critical sequence lower bound which is computed by repetitively looking at a path of unscheduled, non-critical activities in combination with a critical path. The extended critical sequence lower bound calculation starts by calculating the Stinson critical sequence lower bound. This allows us to determine which activities cannot be scheduled within their slack time. Subsequently, all paths consisting of at least two unscheduled, non-critical activities, which start and finish with an activity that cannot be scheduled within its slack time, are constructed. A simple type of dynamic programming then allows us to calculate the extended critical sequence bound for every non-critical path. The branch-and-bound procedure has been programmed in Turbo C Version 2.0 for a personal computer IBM PS/2 Model 70 A21 (or compatibles) running under the DOS operating system. The procedure solves all 110 Patterson test problems (Patterson (1984)) in an average CPU time of 0.204 seconds with a standard deviation of 0.450 seconds. Since the original 1992 paper, extensive computational
experience has been gained with the DH-procedure (see e.g. Demeulemeester et al. (1994a), De Reyck and Herroelen (1993)), with very encouraging results.
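For completeness, the simplest of the three bounds, the remaining critical path length, can be rendered as a longest-path computation over the unscheduled part of the network (a minimal sketch of our own, not the authors' code):

    def remaining_cpl_bound(est, d, succ):
        """Lower bound: max over activities of earliest start + duration +
        longest duration chain from the activity's end to the project end.

        est  : dict activity -> earliest start implied by the partial schedule
        d    : dict activity -> processing time
        succ : dict activity -> list of successors (acyclic network)
        """
        memo = {}
        def tail(i):  # longest chain of durations from the end of i onwards
            if i not in memo:
                memo[i] = max((d[j] + tail(j) for j in succ.get(i, [])),
                              default=0)
            return memo[i]
        return max(est[i] + d[i] + tail(i) for i in est)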
B. Extensions of the DH-procedure

The DH-procedure has been successfully extended to the generalized resource-constrained project scheduling problem (GRCPSP) and the preemptive resource-constrained project scheduling problem (PRCPSP). In the GRCPSP (Demeulemeester and Herroelen (1992b)) three of the basic assumptions of the RCPSP are relaxed. First of all, the GRCPSP allows for precedence relations of the precedence diagramming type - start-start, finish-finish, finish-start and start-finish constraints - with the restriction that activities are not allowed to start before one of their predecessors has started. Secondly, ready times and due dates may be specified for each activity, stating that the activity cannot be started earlier than its ready time and must be completed by its due date. Last but not least, the resource availabilities may be variable over the project horizon. The PRCPSP (Demeulemeester and Herroelen (1992c)) allows activities to be preempted at integral points in time. The procedure for the GRCPSP has recently been used for solving general production scheduling problems involving sequence-independent setup times as well as process and transfer batches. Using appropriate computations for the time lag associated with finish-start precedence relations between network activities and appropriate procedures for computing activity durations, Demeulemeester and Herroelen (1994) have demonstrated how a branch-and-bound procedure, originally developed for solving the GRCPSP, can be used for generating feasible solutions under various scenarios with respect to both the number and size of the process and transfer batches used.
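The effect of generalized precedence relations is easiest to see in a forward pass. The sketch below computes earliest start times for minimal time lags of the four types in an acyclic network, ignoring resources and due dates (our own simplification for exposition; the naming is hypothetical):

    def earliest_starts(order, d, rel, ready):
        """Forward pass over precedence-diagramming relations with minimal lags.

        order : activities in topological order of the relation graph
        d     : activity -> duration
        rel   : list of (i, j, kind, lag), kind in {'SS', 'SF', 'FS', 'FF'}
        ready : activity -> ready time
        """
        es = {}
        offset = {'SS': lambda i, j: es[i],                # start of i
                  'FS': lambda i, j: es[i] + d[i],         # finish of i
                  'FF': lambda i, j: es[i] + d[i] - d[j],  # finish-to-finish
                  'SF': lambda i, j: es[i] - d[j]}         # start-to-finish
        for j in order:
            es[j] = ready.get(j, 0)
            for (i, jj, kind, lag) in rel:
                if jj == j:
                    es[j] = max(es[j], offset[kind](i, j) + lag)
        return es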
C. Minimizing resource availability costs

The branch-and-bound procedures for the RCPSP, the GRCPSP and the PRCPSP provide an answer to the following question: Given the project data and the resource availabilities, what is the shortest project length that can be obtained such that no precedence or resource constraints are violated? A typical characteristic of the solution methodology is the fact that the search procedure is started with a high value
(infinity) for the upper bound on the objective value. The search space is then restricted to all solutions that have an objective value that is strictly smaller than the current upper bound. Each time a feasible solution is found (with an objective value that is necessarily smaller than the current upper bound), the upper bound is immediately updated to the value of the objective function for this solution. The search is then continued with a more restricted search space, containing all solutions with a project length that is at least one time unit smaller than the current upper bound, until all possible solutions have been considered, either explicitly or implicitly. Thus, the upper bound directly affects the magnitude of the search space. Given the availability of an optimal solution procedure for the RCPSP, GRCPSP and PRCPSP, it should also be possible to build efficient procedures for the following decision problem: Given the project data, the resource availabilities and a maximal project length, does there exist a solution with a project length that does not exceed the maximal project length and for which none of the precedence or resource constraints is violated? In the resource availability cost minimization problem, a maximal project length is specified and the objective is to determine the cheapest resource availability amounts for which a feasible solution exists that does not violate the project due date. The solution approach involves the iterative solution of decision problems with different resource availabilities. The search strategy starts by determining the minimum resource availabilities required for the different resource types. These are derived from the solution of resource-constrained project scheduling decision problems, either with a single resource type or with two resource types (for all other resource types the availability is assumed to be infinite). Based on the solutions to these problems (obtained by the DH-procedure and its extensions), the algorithm defines so-called efficient points, which delimit the solution space of all possible combinations of resource availabilities that are not eliminated by solving the resource-constrained project scheduling decision problems with one or two resource types. One then tries to solve the resource-constrained project scheduling decision problem that corresponds with the cheapest efficient point. If no feasible solution can be found with these resource availabilities, the efficient point is cut from the solution space, new efficient points are defined for which the resource availability for one resource type is one unit higher, and the search is continued by trying to solve the resource-constrained project
scheduling problem that corresponds with the currently cheapest efficient point. This process is repeated until a feasible solution is found. The resource availabilities that correspond with the efficient point for which the first feasible solution was found constitute the optimal solution. The computational experience obtained with the proposed algorithm shows it to outperform Mohring's procedure (Mohring (1984)), which is the only other solution procedure available for optimally solving the minimal resource availability cost problem. Moreover, the procedure proves to be less sensitive to changes in the cost parameters.
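The overall control loop can be sketched as a best-first search over availability vectors, with the scheduling procedure abstracted as a feasibility oracle. This simplified rendering glosses over the paper's efficient-point bookkeeping; the names and data layout are ours:

    import heapq

    def min_availability_cost(cost, lower, feasible):
        """Cheapest availability vector for which `feasible` returns True.

        cost     : dict resource type -> unit cost of one unit of availability
        lower    : dict resource type -> minimum availability (derived from
                   the one- and two-resource decision problems)
        feasible : callable(avail) -> does a schedule meeting the project
                   deadline exist? (e.g. the DH-procedure as decision oracle)
        """
        total = lambda av: sum(cost[k] * av[k] for k in av)
        start = tuple(sorted(lower.items()))
        heap, seen = [(total(lower), start)], {start}
        while heap:
            c, items = heapq.heappop(heap)
            avail = dict(items)
            if feasible(avail):
                return avail, c            # cheapest feasible point found
            for k in avail:                # cut this point; raise one resource
                nxt = dict(avail); nxt[k] += 1
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    heapq.heappush(heap, (total(nxt), key))
        return None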
V. DISCRETE TIME/COST TRADE-OFFS IN PROJECT NETWORKS

Demeulemeester et al. (1994b) have developed two optimal procedures for the discrete time/cost trade-off problem (DTCTP) in deterministic project networks of the CPM type, under a single nonrenewable resource. The specification of a project is assumed to be given in activity-on-arc (AoA) notation by a directed acyclic graph (dag) D = (N,A) in which N is the set of nodes, representing network "events", and A is the set of arcs, representing network "activities". We assume, without loss of generality, that there is a single start node 1 and a single terminal node n, n = |N|. The duration y_a of activity a ∈ A is a discrete, nonincreasing function g_a(x_a) of the amount x_a of a single resource allocated to it; i.e., y_a = g_a(x_a). The pair (y_a, x_a) shall be referred to as a "mode", and shall be written as y_a(x_a). Thus an activity that assumes four different durations according to four possible resource allocations to it shall be said to possess four modes. Three possible objective functions for the DTCTP (see also De et al. (1993)) are considered. For the first objective function (subsequently referred to as P1) we specify a limit R on the total availability of a single nonrenewable resource type. The problem is then to decide on the vector of activity durations (y_1,...,y_m), m = |A|, that completes the project as early as possible under the limited availability of the single nonrenewable resource type. A second objective function (referred to as P2) reverses this problem formulation: now we specify a limit T on the project length and we try to minimize the sum of the resource usage over all activities. For the third and final objective function (referred to as P3) we have to compute the complete time/cost trade-off function for the total project, i.e., in the case of the DTCTP,
all the efficient points (T,R) such that with a resource limit R a project length T can be obtained, and such that no other point (T',R') exists for which both T' and R' are smaller than or equal to T and R.
The basic computation, for a given deadline T, is

    minimize Σ_{(i,j)∈A} x_ij   subject to   t_n ≤ T,

where t_n denotes the realization time of terminal node n; this is exactly objective P2, and solving it for successive values of T traces out the trade-off curve of P3. The early contributions to the basic time/cost trade-off problem in CPM networks assumed ample resource availability and tried to minimize the total project cost subject to precedence constraints and lower and upper bound constraints on the activity durations. While the problem has been widely studied under the assumption of continuous time-cost relationships (see standard texts such as Moder et al. (1983)), the literature on the DTCTP, where the time-cost relationships are defined at discrete points, has been rather sparse. De et al. (1993) offer an excellent review and have shown (De et al. (1992)) that any exact solution algorithm would very likely exhibit an exponential worst-case complexity. The two optimal procedures developed by Demeulemeester et al. (1994b) are based on dynamic programming logic. The first procedure (Reduction Plan 1) heavily relies on a network reduction scheme proposed by Bein et al. (1992) and subsequently referred to as the BKS approach. Series-parallel networks can be reduced by a cascade of series and parallel reductions through the application of dynamic programming. The series optimization goes as follows: if a = (i,j) is the unique arc into j and b = (j,k) is the unique arc out of j, then these two arcs in series are replaced by a single arc c = (i,k). The parallel optimization process can be viewed as parallel arc reduction: two or more parallel arcs a_1,...,a_s leading from i to j are replaced by a unique arc a = (i,j). A project that can be optimized through a succession of series and parallel optimizations (reductions) is said to be series/parallel reducible (s/p reducible, for short). A project which cannot be thus optimized is called s/p irreducible. An efficient method for s/p irreducible networks consists of enumerating the cost assignments for a limited number of activities, preferably the minimum number, such that the resulting network becomes s/p reducible. We refer to this method as "optimal fixing". A "reduction plan" now consists of all actions that have to be performed on a network in order to reduce the network to one single arc, including the determination of which activities need to be
fixed. As soon as such a plan is constructed, it is quite easy to obtain a solution for each of the objective functions we proposed. The BKS approach is composed of two major steps: the first constructs the "complexity graph" of the given project network (easily accomplished from standard "dominator tree" arguments), and the second determines the minimal node cover of this complexity graph. The key element in their construction is that the complexity graph is directed and acyclic, whence its minimal node cover can be easily secured by a simple (i.e., polynomially bounded) "maximum flow" procedure. The BKS scheme yields the minimum number of node reductions. Such a scheme, however, may not minimize the computational effort required in reducing the network, because we should consider not only the number of node reductions but also the order in which the reductions are performed (oftentimes there is a choice), as well as the total number of leaves which have to be evaluated. These additional factors may have a dominant effect on the computation time, as evidenced by our experimentation. In the enumerative scheme adopted (Reduction Plan 2), Demeulemeester et al. (1994b) have termed a complete set of resource allocations to all the "index activities" in the project a "leaf". They developed a branch-and-bound procedure for generating a reduction plan that aims at minimizing the total number of leaves evaluated, which is equivalent to minimizing the computing effort. The suggested procedures were programmed for personal computer. In the absence of a standard set of test problems for the DTCTP, the procedures were extensively tested through Monte Carlo experimentation on a variety of networks drawn from the literature or generated specifically for this study. The results are most encouraging: for projects with up to 20 nodes and 45 activities, the time required never exceeded 7 minutes. Furthermore, the authors recommend the use of Plan 2 for projects with a large number of activities and high "reduction complexity" (BKS define reduction complexity as the minimum number of node reductions sufficient - along with series and parallel reductions - to reduce a two-terminal acyclic network to a single edge). It outperforms Plan 1 both in terms of CPU time required and number of leaves visited. The experiments also confirmed the dominant role played by the "reduction complexity" and suggested its use as a measure of network complexity, on which we report in Section VII.
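For series-parallel structures, the dynamic programming behind Reduction Plan 1 amounts to combining mode lists and pruning dominated (duration, cost) pairs. A minimal sketch, with cost standing in for the usage of the single nonrenewable resource (the function names are ours):

    def prune(modes):
        """Keep only efficient modes: sorted by duration, cost must strictly drop."""
        out = []
        for dur, cost in sorted(set(modes)):
            if not out or cost < out[-1][1]:
                out.append((dur, cost))
        return out

    def series(m1, m2):
        """Arcs in series: durations add, resource usages (costs) add."""
        return prune([(d1 + d2, c1 + c2) for d1, c1 in m1 for d2, c2 in m2])

    def parallel(m1, m2):
        """Parallel arcs: joint duration is the maximum, costs add."""
        return prune([(max(d1, d2), c1 + c2) for d1, c1 in m1 for d2, c2 in m2])

Repeated application along a reduction plan leaves one arc whose mode list is exactly the project's time/cost trade-off function (objective P3); for s/p irreducible networks the fixed "index activities" are enumerated around this core.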
VI. MAXIMIZING THE NET PRESENT VALUE OF PROJECT NETWORKS

Over the past few years, researchers became interested in exploring objective functions other than minimizing project duration. One popular objective function which receives increasing attention is the maximization of the net present value (npv) of a project. The problem may be stated as follows. Given specified net cash flows (ncf's) at selected key events in the activity-on-the-arc mode of representation of a project, what is the optimal schedule of the realization times of these key events in order to maximize the net present value of the project as a whole? It should be clear that the ncf at node i, denoted by a_i, may be positive or negative, reflecting net receipts and net disbursements, respectively. Typically, positive and negative ncf's are interspersed, with the majority of the earlier cash flows being negative, reflecting outlays by the contractor which are not fully recovered by owner payments, and the majority of the later ncf's being positive, reflecting the recoupment by the contractor of expenditures plus a reasonable profit. The alert reader will realize that even in the absence of resource constraints, the project duration which maximizes the net present value of a project may be longer than the duration of the critical path! It may even be the case that the optimal project duration (in the absence of due dates) equals infinity, indicating that it is best not to perform the project at all. Or it may be that any realization times for the key events will maximize the npv. Elmaghraby and Herroelen (1990) review the literature and present an optimal procedure which exploits the 'essential simplicity' of the problem. The scheduling problem reduces to the problem of either advancing some node realizations or retarding them while respecting the precedence relations. Indeed, if a_i is positive then the realization time of node i should be as small as possible, and if a_i is negative then the realization time of node i should be as large as possible. Elmaghraby and Herroelen (1990) develop an optimal procedure which iteratively builds tree structures which are eventually moved forward and/or backward in time until the optimal solution is found. The procedure has been implemented by Herroelen and Gallens (1993). They report promising computational times on a problem set involving 250 randomly generated projects with up to some 20 nodes and 200 activities.
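In symbols, and assuming continuous compounding at discount rate α (one common way of stating the unconstrained problem; the notation here is ours, not a quotation from the cited papers), the problem can be written as

    \max \sum_{i \in N} a_i \, e^{-\alpha T_i}
    \quad \text{subject to} \quad
    T_j - T_i \ge d_{ij} \;\; \forall (i,j) \in A, \qquad T_1 = 0,

where T_i is the realization time of key event i and d_ij the duration of activity (i,j); a project deadline, when imposed, adds the constraint T_n ≤ T̄.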
The problem of maximizing the npv of projects under additional resource and capital budgeting constraints is a popular area of research. Many optimal and suboptimal solution procedures have been developed. Recently, Icmeli and Erenguc (1993) have extended the DH-procedure developed by Demeulemeester and Herroelen (1992a) for the RCPSP to the problem of maximizing the npv of projects subject to resource constraints. Several theoretical issues still remain unresolved, however. Elmaghraby and Herroelen (1990) already questioned the validity of the npv criterion as such. In financial theory, the npv is used as a criterion for selecting investment projects, while here, in the project scheduling context, the objective is to schedule projects in order to maximize the npv. In addition, it is generally assumed that the ncf a_i is independent of the realization time of the key event i. In many contracts, however, one earns a bonus for early realization and pays a penalty for late realization of key events. The models developed in the literature lack consistency in the way they treat the ncf's: cash flows are sometimes associated with the completion of activities, sometimes with the commencement of activities. Progress payments are explicitly considered or not. Quite often, the reader is left with the impression that a number of authors confuse 'modelling' a problem 'just for the sake of mathematics' with the solution of real-life practical project scheduling problems.

VII. NETWORK COMPLEXITY AND ITS MEASUREMENT

Managers and theoreticians have a definite sensation of the varying degrees of difficulty in the analysis and synthesis of different projects. The sensation that certain project network parameters render a problem easy or more difficult to solve is felt by almost all workers in the field. Measuring this sensation, in other words measuring network complexity, is important for a number of reasons. Testing the accuracy and efficiency of optimal and suboptimal solution procedures for project scheduling problems requires the use of a set of benchmark instances which span the full range of complexity, from very easy to very hard instances. The generation of such instances heavily depends on the possibility to isolate the factors that determine the computing effort required by the solution procedure used to solve the problem. Eventually, a measure of network complexity could serve as a predictor of the processing time requirements for a particular
project planning software package. This inspired a number of researchers to develop various complexity measures. The best known measure which tries to characterize the topological structure of a project network is the coefficient of network complexity (CNC), which is simply defined as the number of arcs over the number of nodes. Elmaghraby and Herroelen (1980) already questioned the use of the CNC. The measure relies entirely on the count of activities and nodes in the network. Since it is easy to construct networks with equal numbers of arcs and nodes but varying degrees of difficulty in analysis, they failed to see how the CNC can discriminate among them. As already mentioned in Section V, Bein et al. (1992) introduced a new characterization of two-terminal acyclic networks which essentially measures how nearly series-parallel a network is. They define the reduction complexity as the minimum number of node reductions sufficient (along with series and parallel reductions) to reduce a two-terminal acyclic network to a single edge. De Reyck and Herroelen (1994) adopt the reduction complexity as their definition of the complexity index (CI) of an activity network in order to investigate its potential use as a measure of activity network complexity. Two problems were chosen for analysis: the resource-constrained project scheduling problem (RCPSP) and the discrete time/cost trade-off problem (DTCTP). Bein et al. (1992) define complexity in terms of a sequence of reductions of an st-dag. A parallel reduction at i,j replaces two or more arcs a_1, a_2,...,a_s, all joining i to j, by a single arc a = (i,j). A series reduction at j is possible when a = (i,j) is the unique arc into j and b = (j,k) is the unique arc out of j: a and b are replaced by a single arc c = (i,k). Let [D] denote the network obtained by applying to D all series-parallel arc reductions. If D = [D] then D is said to be irreducible. Following Bein et al. (1992), we say that node j of an irreducible network is eligible for a node reduction when j has unit in-degree or out-degree, and j ≠ 1,n. Let a = (i,j) be the unique arc into j and b_1 = (j,k_1),...,b_s = (j,k_s) be the arcs out of j. Then the reduction of node j replaces a,b_1,...,b_s by the arcs c_1 = (i,k_1),...,c_s = (i,k_s). The case where j has unit out-degree is symmetric. Note that in an irreducible network any node whose only predecessor is 1 or whose only successor is n is eligible for reduction. Therefore, every network can be reduced to the single arc (1,n) by a sequence of node reductions interleaved with series and parallel reductions. The number of node reductions in such a sequence may differ. Bein et al. (1992) define the reduction
complexity of D as the minimum number of node reductions sufficient (along with series and parallel reductions) to reduce D to a single arc. More formally, let D∘j denote the network obtained from the reduction of node j in D. Then the reduction complexity is the smallest q for which there exists a sequence of nodes (j_1, j_2,...,j_q) such that [[...[[D]∘j_1]...]∘j_q] = (1,n). Such a sequence is called a reduction sequence. De Reyck and Herroelen (1994) take the length of a minimal reduction sequence, i.e. the reduction complexity, as their definition of the complexity index, CI, of D. Since all series-parallel networks have a CI-value equal to zero, the complexity index CI seems to be a good measure of how close the network is to being series-parallel. Bein et al. (1992) prove that the CI of a network D is equal to the cardinality of a minimum node cover in its complexity graph, C(D). The complexity graph, C(D), of a network D = (N,A) is defined as follows: (i,j) ∈ C(D), i.e., (i,j) is an arc of C(D), if there exist paths π(1,j), π(i,n), π_1(i,j) and π_2(i,j) such that π(1,j) ∩ π_1(i,j) = {j} and π(i,n) ∩ π_2(i,j) = {i}. Note that the paths π_1(i,j) and π_2(i,j) may be the same. The definition implies that neither 1 nor n appears as a node in C(D). The same authors have developed an algorithm for constructing the complexity graph in polynomial time. Since C(D) is transitive and acyclic, a minimum node cover can be found in polynomial time by reducing the minimum node cover problem in C(D) to the problem of finding a maximum matching in a bipartite graph. The results obtained by De Reyck and Herroelen (1994) show a negative correlation between the average CPU time required for solving 5000 RCPSP instances using the DH-procedure and the CNC. The negative correlation effect is caused by the variation in the CI and the positive correlation between the CI and the CNC. This result is quite interesting, since it shows that it is very ambiguous to attach all explanatory power to the CNC. Regression analysis confirmed the negative correlation between the CI and the hardness of the RCPSP. The CNC explains nothing extra beyond what is already explained by the CI. Using Reduction Plan 1 mentioned earlier, a total of 250 DTCTP instances were solved to optimality. Results indicate that the number of activity execution modes and the CI have a strong effect on the processing time needed. The portion of the variability which could be explained by both factors was 95%. De Reyck and Herroelen (1994) are confident that the notion of complexity expressed by the CI lies at the very heart of the DTCTP itself.
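A network is s/p reducible (CI = 0) precisely when series and parallel arc reductions alone suffice. That check is easy to sketch; computing a positive CI requires the complexity graph and node cover machinery above, which we do not reproduce here:

    def sp_reduce(arcs, s, t):
        """Apply series and parallel arc reductions until none applies.

        arcs : list of (i, j) pairs (duplicate pairs represent parallel arcs)
        Returns the irreducible arc list; [(s, t)] means CI = 0.
        """
        arcs, changed = list(arcs), True
        while changed:
            changed = False
            if len(set(arcs)) < len(arcs):           # parallel reduction
                arcs, changed = list(set(arcs)), True
            for j in {j for _, j in arcs} - {s, t}:  # series reduction at j
                ins = [a for a in arcs if a[1] == j]
                outs = [a for a in arcs if a[0] == j]
                if len(ins) == 1 and len(outs) == 1:
                    arcs.remove(ins[0]); arcs.remove(outs[0])
                    arcs.append((ins[0][0], outs[0][1]))
                    changed = True
                    break
            # node reductions (not shown) would be interleaved here for CI > 0
        return arcs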
Keeping the CI constant, De Reyck and Herroelen (1994) also tried to investigate the impact of resource availability on the solution effort required to solve the RCPSP. Results on 4500 networks confirm the bell-shaped curve conjecture made by Elmaghraby and Herroelen (1980). If resources are available in small amounts, there will be relatively little freedom in scheduling the activities (for instance, the activities may have to be placed in series and the resulting project duration will equal the sum of the activity durations). Hence, the corresponding RCPSP should be quite easy to solve. If, on the other hand, resources are amply available, the activities can simply be scheduled in parallel and the resulting project duration will be equal to the critical path length. Hence, the required computational effort should again be small. It is at intermediate availability levels that instances prove hardest.

VIII. CONCLUSION

The objective of this paper was to provide a guided tour through what we believe to be important recent developments in the area of resource-constrained project scheduling. The commercial project planning software packages currently available on the market do excel in offering the user resource planning features allowing him to obtain reports on resource usage in specific time intervals, resource profile charts and cumulative resource charts, but they do not exploit the power of the recent developments in the area of resource-constrained scheduling. This does not come as a surprise. Complexity theory indicates that most resource-constrained project scheduling problems are NP-hard. This explains and justifies the use of branch-and-bound as the preferred optimal solution strategy. The resource-constrained project scheduling problem involves the scheduling of project network activities subject to precedence and resource constraints under the objective of minimizing the project duration. Recently, a branch-and-bound procedure (Demeulemeester and Herroelen (1992a)) has been developed for optimally solving the basic RCPSP, where the precedence relations are of the finish-start type with zero time lag and the limit on the availability of renewable resources remains constant over time. Over the past few years, this DH-procedure has been used by a number of international research teams on a wide variety of problem instances. Computational results are very encouraging and confirm the code to be superior to other optimal procedures. A few seconds are needed on a personal computer for optimally solving projects of some 30-40 activities and 3
resources. We conjecture that optimization of large-scale projects can be achieved with this procedure through decomposition of the project into smaller, loosely connected units. The DH-procedure has been successfully extended to more realistic problem settings such as other than finish-start constraints, activity ready times and due dates, activity preemption, and nonconstant resource availabilities. In addition, the procedure has been used for the generation of feasible finite capacity schedules under various scenarios with respect to both the number and size of the process and transfer batches used. In project planning practice, determining the cheapest resource availability amounts for which a feasible schedule exists that does not violate a given project due date is an important issue. Again, the DH-procedure and its extensions can be put to use to generate optimal solutions for this type of resource availability cost minimization problem. When multiple modes are available for executing project activities, time/cost trade-off problems arise in that the best execution modes must be determined subject to a budget constraint. Demeulemeester et al. (1994b) have developed two optimal procedures for solving this discrete time/cost trade-off problem under three different objectives of analysis. The first procedure is based on the notion of network reduction and the algorithm developed by Bein et al. (1992) for finding the minimum number of node reductions necessary to transform a general network into a series-parallel network. The second algorithm minimizes the computational effort in enumerating alternative modes through a branch-and-bound search tree. Computational results are encouraging: projects with up to 40-50 activities can be optimally solved on a personal computer in a few minutes. Minimizing project duration is not the only interesting project scheduling objective. An active area of research is the development of project scheduling procedures which aim at maximizing the net present value of projects. In the absence of resource constraints, Elmaghraby and Herroelen (1990) and Herroelen and Gallens (1993) have exploited the 'essential simplicity' of the problem to derive an efficient optimal solution procedure which yields optimal results in very encouraging computation times. The DH-procedure, originally developed for the RCPSP under the makespan objective, has been successfully extended to the npv case. Many conceptual issues concerning the use of the npv criterion, however, still remain to be resolved.
Finally, progress has been made in the search for reliable measures of network complexity. De Reyck and Herroelen (1994) have performed a number of experiments which confirm that the so-called complexity index (defined as the minimum number of node reductions which are sufficient for reducing an acyclic two-terminal activity-on-the-arc network to a single arc) may be used as a measure of complexity for both the RCPSP and the DTCTP. The higher the CI, the easier it is to solve an RCPSP instance and the more difficult it is to solve an instance of the DTCTP. Another consequence of the knowledge that most resource-constrained project scheduling problems are NP-hard is the strong interest in the development of heuristic search procedures. Heuristic search procedures seek an acceptable solution attainable at an affordable computational cost, without guaranteeing that the optimum has been achieved. Most recently developed procedures are spin-offs of local neighbourhood search techniques (Reeves (1993)). They appear under fancy names such as simulated annealing, tabu search, genetic algorithms and constraint programming. Essentially, a neighbourhood of a solution is a set of solutions that can be reached from the solution through a number of very simple operations such as the removal, addition, or exchange of objects. Starting from a suboptimal solution, a given neighbourhood is searched for improvement (for example by exchanging activities in the schedule). When a better solution is found, the procedure is re-started from the better solution and continued until improvement no longer occurs. It sometimes pays to resume the search process from a solution which is worse than the previous one obtained; in this way one hopes to escape from a local optimum and find better solutions. Simulated annealing stems from the basic idea of simulating the cooling process of material in a heat bath on a computer. It took some thirty years to realize that this simulation process could be used in searching for acceptable solutions to scheduling problems. Uphill moves are allowed, but their frequency is guided by a probability distribution which changes during the search. Tabu search stems from methods to escape from local optimality zones through the process of adding and removing additional constraints in order to allow the search of otherwise forbidden zones in the search space. These constraints take different forms, such as declaring certain exchanges tabu for a number of iterations, or accepting a worse solution as a new starting point. A generic skeleton of such a neighbourhood search is sketched below.
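The skeleton, in an annealing-style variant (a generic sketch under our own naming, not a reproduction of any published procedure; the cost and neighbour functions are placeholders to be supplied):

    import math, random

    def neighbourhood_search(cost, neighbour, start, iters=10000, t0=10.0,
                             alpha=0.999):
        """Annealing-style search: always accept improvements, accept uphill
        moves with a probability that shrinks as the temperature cools."""
        current, best = start, start
        t = t0
        for _ in range(iters):
            cand = neighbour(current)           # e.g. swap two activities in
            delta = cost(cand) - cost(current)  # a precedence-feasible list
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = cand
                if cost(current) < cost(best):
                    best = current
            t *= alpha
        return best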
Genetic algorithms find their origin in the analogy between the representation of a complex structure by means of a vector of components and the genetic structure of a chromosome. By genetic manipulation of plants and animals, offspring are sought which have certain desirable characteristics, determined by the way in which the parent chromosomes are combined. In a similar manner, genetic algorithms keep a population of chromosomes which are a coded representation of scheduling solutions. These are then manipulated via genetic operators such as crossovers and mutations. Knowledge-based schedulers which use the technique of constraint programming are entering the market. The treatment of these new neighbourhood search techniques in the context of resource-constrained project scheduling is still in its infancy. As reported by Pinson (1994), promising results have been obtained with tabu search for example, while the results with constraint programming techniques are not equally encouraging. Further research in this area should make it possible to move from a 'problem formulation' strategy in the direction of a 'problem solving' strategy. In conclusion, the recent developments in the area of resource-constrained project scheduling are promising. Progress has been made in optimally solving hard project scheduling problems, and in understanding the reasons why so many scheduling problems in practice prove to be very hard nuts to crack. Continuing research efforts should make it possible to close the gap between scheduling theory and practice. Workable finite scheduling software which exploits the intelligence of optimal and suboptimal procedures developed in the literature may be the result.

REFERENCES

Assad, A.A. and E.A. Wasil, 1986, Project Management Using a Microcomputer, Computers and Operations Research 13, 231-260.
Baker, K.R., 1974, Introduction to Sequencing and Scheduling, (Wiley).
Bein, W.W., J. Kamburowski and M.F.M. Stallmann, 1992, Optimal Reduction of Two-Terminal Directed Acyclic Graphs, SIAM Journal on Computing 21, 6, 1112-1129.
Bellman, R., A.O. Esogbue and I. Nabeshima, 1982, Mathematical Aspects of Scheduling and Applications, (Pergamon Press).
Blazewicz, J., J.K. Lenstra and A.H.G. Rinnooy Kan, 1983, Scheduling Subject to Resource Constraints: Classification and Complexity, Discrete Applied Mathematics 5, 11-24.
Conway, R.W., W.L. Maxwell and L.W. Miller, 1967, Theory of Scheduling, (Addison-Wesley Publishing Company).
De, P., E.J. Dunne, J.B. Ghosh and C.E. Wells, 1992, Complexity of the Discrete Time-Cost Tradeoff Problem for Project Networks, Tech. Report, (Dept. MIS and Dec. Sci., University of Dayton, Dayton, OH).
De, P., E.J. Dunne, J.B. Ghosh and C.E. Wells, 1993, The Discrete Time-Cost Tradeoff Problem Revisited, Working Paper 93-04, (Dept. MIS and Dec. Sci., University of Dayton, Dayton, OH).
Demeulemeester, E., 1992, Optimal Algorithms for Various Classes of Multiple Resource-Constrained Project Scheduling Problems, Ph.D. Thesis, (Department of Applied Economic Sciences, Katholieke Universiteit Leuven).
Demeulemeester, E., 1994, Minimizing Resource Availability Costs in Time-Limited Project Networks, Management Science, to appear.
Demeulemeester, E. and W. Herroelen, 1992a, A Branch-and-Bound Procedure for the Multiple Resource-Constrained Project Scheduling Problem, Management Science 38, 12, 1803-1818.
Demeulemeester, E. and W. Herroelen, 1992b, A Branch-and-Bound Procedure for the Generalized Resource-Constrained Project Scheduling Problem, Operations Research, to appear.
Demeulemeester, E. and W. Herroelen, 1992c, An Efficient Optimal Solution Procedure for the Preemptive Resource-Constrained Project Scheduling Problem, Research Report no. 9216, (Department of Applied Economic Sciences, Katholieke Universiteit Leuven, Belgium).
Demeulemeester, E., W. Herroelen, W.P. Simpson, S. Baroum, J.H. Patterson and K.-K. Yang, 1994a, On a Paper by Christofides et al. for Solving the Multiple-Resource Constrained Single Project Scheduling Problem, European Journal of Operational Research 76, 1, 218-228.
Demeulemeester, E.L., W.S. Herroelen and S.E. Elmaghraby, 1994b, Optimal Procedures for the Discrete Time/Cost Trade-Off Problem in Project Networks, European Journal of Operational Research, to appear.
Demeulemeester, E.L. and W.S. Herroelen, 1994, Modelling Setup Times, Process Batches and Transfer Batches Using Activity Network Logic, European Journal of Operational Research, to appear.
De Reyck, B. and W. Herroelen, 1993, On the Use of the Complexity Index as a Measure of Complexity in Activity Networks, Research Report No. 9332, (Department of Applied Economic Sciences, Katholieke Universiteit Leuven).
De Wit, J. and W. Herroelen, 1990, An Evaluation of Microcomputer-based Software Packages for Project Management, European Journal of Operational Research 49, 102-139.
Edwards, K. et al., 1984, Project Management with the PC: Part I and Part II, PC Magazine 3, 21, 109-156 and 3, 24, 193-277.
Elmaghraby, S.E. and W.S. Herroelen, 1980, On the Measurement of Complexity in Activity Networks, European Journal of Operational Research 5, 223-234.
Elmaghraby, S.E. and W.S. Herroelen, 1990, The Scheduling of Activities to Maximize the Net Present Value of Projects, European Journal of Operational Research 49, 1, 35-49.
French, S., 1982, Sequencing and Scheduling - An Introduction to the Mathematics of the Job-Shop, (Wiley).
Fresko-Weiss, H., 1989, High-End Project Managers Make the Plans, PC Magazine 8, 9, 155-178.
Herroelen, W.S., 1972, Resource-Constrained Project Scheduling - The State of the Art, Operational Research Quarterly 23, 261-275.
Herroelen, W.S., 1991, Operationele produktieplanning, (Acco, Leuven).
Herroelen, W.S. and E. Gallens, 1993, Computational Experience with an Optimal Procedure for the Scheduling of Activities to Maximize the Net Present Value of Projects, European Journal of Operational Research 65, 274-277.
Herroelen, W.S. and E.L.
Demeulemeester, 1992, Recent Advances in Branch-and-Bound Procedures for Resource-Constrained Project Scheduling Problems, Proceedings Summer School on Scheduling Theory and Its Applications, Château de Bonas, France, September 28 - October 2, 1992, (Wiley).
Hogan et al., 1985, Project Planning Programs Put to the Test, Business Software 3, 3, 21-56.
Icmeli, O. and S. Erenguc, 1993, A Branch-and-Bound Procedure for the Resource-Constrained Project Scheduling Problem with Discounted Cash Flows, Research Paper, (Cleveland State University, Cleveland, Ohio).
Lawler, E.L., J.K. Lenstra, A.H.G. Rinnooy Kan and D.B. Shmoys, 1993, Sequencing and Scheduling: Algorithms and Complexity, in Graves et al., eds., Handbooks in OR and MS, Vol. 4: Logistics of Production and Inventory, (North-Holland).
Maroto, C. and P. Tormos, 1994, Project Management: An Evaluation of Software Quality, International Transactions in Operational Research 1, 2, 209-221.
Moder, J.J., C.R. Phillips and E.W. Davis, 1983, Project Management with CPM, PERT and Precedence Diagramming, (Van Nostrand Reinhold).
Mohring, R.H., 1984, Minimizing Costs of Resource Requirements in Project Networks Subject to a Fixed Completion Time, Operations Research 32, 1, 89-120.
Morton, Th.E. and D.W. Pentico, 1993, Heuristic Scheduling Systems - With Applications to Production Systems and Project Management, (Wiley Interscience).
Patterson, J.H., 1984, A Comparison of Exact Approaches for Solving the Multiple Constrained Resource Project Scheduling Problem, Management Science 30, 7, 854-867.
Pinson, E., 1994, Mémoire d'habilitation à diriger des recherches, (Université Pierre et Marie Curie (Paris VI)).
Reeves, C., ed., 1993, Modern Heuristic Techniques for Combinatorial Problems, (Blackwell Scientific Publications).
Rinnooy Kan, A.H.G., 1976, Machine Scheduling Problems - Classification, Complexity and Computations, (Martinus Nijhoff).
Shmoys, D.B. and E. Tardos, 1993, Computational Complexity of Combinatorial Problems, in Graham, R.L., M. Grötschel and L. Lovász, eds., Handbook of Combinatorics, (North-Holland, Amsterdam).
Stinson, J.P., E.W. Davis and B.M. Khumawala, 1978, Multiple Resource-Constrained Scheduling Using Branch-and-Bound, AIIE Transactions 10, 3, 252-259.
Tijdschrift voor Economie en Management, Vol. XXXIX, 4, 1994
Queueing Theory and Operations Management

by M. LAMBRECHT* and N. VANDAELE*

* Department of Applied Economic Sciences, Operations Management Group, K.U. Leuven. This research was supported by NFWO/FKFO Belgium project 2.0053.93.
Waiting is an intimate dimension of our daily lives. Everyone has experienced waiting in line at the supermarket, the bank and any number of other places. We constantly observe traffic, hospital or court congestion, customers or machines waiting, and we experience waiting times for almost every service offered. These waiting-line situations are also called queueing problems. The common characteristic is that a number of physical entities (the arrivals) are attempting to receive service from limited facilities (the servers), and as a consequence the arrivals must sometimes wait in line for their turn to be served. Numerous applications have been described and the mathematics of queueing has advanced tremendously over the last 40 years. The objective of this paper is to focus on operations management applications of queueing theory. The first textbook on the subject, "Queues, Inventories and Maintenance", was written in 1958 by Morse. A tremendous number of queueing problems occur in production and inventory management. Think of the design of facility layouts, staffing decisions, maintenance problems, the physical capacity problem, lead time estimation and lot sizing decisions, to mention only a few. Over the last decade Just-In-Time (JIT), Time Based Competition and Fast Cycle Time strategies gave rise to a renewed interest in queueing. Indeed, a Fast Cycle Time strategy is basically dealing with time, with reduced waiting times and an emphasis on a fast
Time-to-Market. It is amazing to realize that with a little understanding of how queues behave, the solution to many operations management problems becomes clear if not obvious. The paper is organized as follows. We select three major problem areas in operations management: the inventory-capacity trade-off, the impact of uncertainty (disruptions, variability) and capacity utilization on lead time, and the impact of lot sizing on lead times. We show how insights from queueing theory may be helpful to better manage these issues. It is tempting to treat the subject mathematically, but we opt in this article for a more qualitative approach. The enthusiastic reader, however, should not underestimate the mathematical intricacies involved.

II. INSIGHTS FROM QUEUEING THEORY

A. The Capacity-Inventory Trade-Off

In order to better understand the capacity-inventory trade-off, it is important to understand the nature of the Just-In-Time (JIT) revolution. The JIT revolution can be summarized as follows (Zangwill (1992)): "The old viewpoint: Increase inventory, hold a lot in stock, and then you are ready for anything. The new viewpoint: Reduce inventory, cut the production lead time and you can respond fast to anything. These are two opposing views about being responsive to the customer". In the first case companies satisfy customer orders from stock, which is an immediate response. In a JIT environment companies satisfy customer demands with a certain time delay, which of course is kept as small as possible. We view the company more as a queueing system instead of an inventory system. Behind this new viewpoint focussing on a fast response, there is a synergetic chain of manufacturing changes that goes several layers deep. A successful implementation depends on the ability to eliminate all forms of waste, continuous improvement, employee involvement, disciplined implementation, supplier participation, reorganization of the production floor, modular designs, cell layouts, process control and total quality creation. The objective is to improve productivity. Moreover, it is hoped that this fast response to specific customer needs results in enhanced market power. The improved productivity and the stronger market position are supposed to be the basis for a sustainable competitive advantage.
The question now is: how can we guarantee a fast response without the protection of inventory, as JIT asks us to do? In order to answer this question, let's turn to a basic insight from queueing theory. It is well known that companies that try to operate with tight capacity are forced to carry substantial inventories to protect against unexpected surges in demand and other contingencies (Zipkin (1991)). High levels of capacity utilization cause increased congestion, longer lead times and higher inventories due to uncertainty. So if a company wants to reduce lead times or lower inventories, then it is advisable to have excess capacity. That is the inventory-capacity trade-off. We quote from Zipkin (1991): "Indeed, companies often find that JIT means buying more and better equipment - a serious commitment of capital resources". In today's manufacturing environment companies are stressing due-date performance, time (cycle time, response time, time-to-market) and reduced inventory levels as primary measures of shop performance. In order to achieve this, companies seek to add capacity cushions (instead of inventory buffers) in an attempt to become more responsive to customer demands. This of course is contrary to the traditional performance measure of resource efficiency (high levels of machine utilization). The core problem is the evaluation of the benefits associated with lower inventories versus the lower efficiencies associated with excess capacity. The question is whether a company is better off by replacing inventory by capacity, or by keeping the machine assets tight and accepting more inventory. In order to have some empirical evidence of this phenomenon, we analyzed the inventory position and the capital investments in the Belgian metalworking industry in the period 1977-1991. Over this period the inventory position, measured as work-in-process and finished product inventory relative to value added, dropped from 50% to 31%. The investments in material fixed assets relative to value added increased from 32% to 42%. Another interesting observation is the following. In the period 1977-1991 total sales in the Belgian industry increased roughly by 300% (including inflation and taking 1977 as the reference year). Over the same period depreciation charges increased by 420%. The decrease in inventory is of course not only attributable to the capacity expansion. A period of economic growth, e.g., is always associated with a period of inventory depletion. It is also known that
investments in automation and flexible equipment are larger than the required investments for conventional machinery. The positive side of the coin is that the increased capital intensity contributes positively to employee productivity, and that reductions in inventory also help to improve worker productivity. How can reducing inventories improve productivity? One possible mechanism is the dynamic learning process: inventory reduction helps to achieve a higher learning rate through a clearer exposition and easier identification of problems (Kim (1993)). There is, however, also a major drawback associated with the above-mentioned redistribution phenomenon. The question is what happens to companies that invested heavily in plant and equipment and that are confronted with a period of economic recession. The drop in demand, the entrance of many new competitors and the heavy investment boom created huge overcapacities in many industries; prices dropped and profits disappeared. We again experience a period of intensified price competition (cost cutting programs). Don't forget that one of the premises of the JIT, time-based philosophy was the prospect of achieving competitive advantage, higher margins (premiums) and more attractive profits. Now it turns out that excess capacity is an element of rigidity, a source of additional riskiness that may result in more variability of performance. Is there a solution to this problem? Let us go back to queueing theory. There we learn that variability and uncertainty are the key parameters. The more uncertainty, the more damaging high levels of machine utilization are to inventories and lead times. We expect, in other words, lower levels of capacity utilization (more excess capacity) in job-shop manufacturing (e.g. machine building) compared to more standardized manufacturing environments (e.g. consumer electronics). The argument is that the greater the uncertainty (e.g. in the receipt of customer orders), the higher the negative impact of increased congestion on inventories and lead times. We indeed observe a 10 percentage point difference in average utilization between the industrial product sector (72%) and the consumer product sector (82%) of the Belgian metalworking industry (period 1981-1992). Every effort to reduce the level of variability (process control, zero defects, better supplier relationships, better forecasting, ...) will automatically have a positive impact on inventories. The process of continuous improvement is one of the only ways to escape from the inventory-capacity conflict, which is basically a conflict between flexibility
(responsiveness) and efficiency. Every inventory reduction program should be backed up by efforts of continuous improvement and better capacity management. So the key to the solution is fighting disruptions caused by process instability and all sorts of unreliabilities. Disruptions lead to unnecessarily high capital costs. Fighting disruptions is a learning process offering a clear target for human resources management. Ultimately, 'people' implement strategies. Participative management combined with self-directed teams emphasizing joint problem solving and teamwork, and total productive maintenance based on responsibility at the source, are all means to achieve the objective. In the next paragraph we analyze in greater detail the impact of uncertainty and capacity utilization on lead times.
B. Impact of Disruptions and Capacity Utilization on Lead Time

The fast cycle strategy and the associated crusade for lower inventories are based on the best known relationship of queueing theory: Little's Law. For simplicity, assume a single server queueing model with an arrival rate of λ and a processing rate of μ customers per time unit. Under steady state conditions, Little's Law combines the two most important operations management performance measures into one formula: the average number of customers in the system, E(L) (equivalent to the average inventory), and the average time a unit spends in the system, E(W) (equivalent to the average lead time).

Little's Law: E(L) = λ · E(W)

Little's Law, which is quite general and applies to any queue discipline, specifies how inventory and time in the system are linked. A system containing a lot of inventory inevitably results in long lead times or, conversely, reduce inventory and respond fast. The lead time, defined as the total elapsed time from order arrival until the order is finished and the customer is served, consists of two important parts: the waiting time and the processing time. The latter is mostly a fairly stable component of lead time. The average waiting time, however, is highly sensitive to system conditions such as the level of uncertainty and the capacity utilization. Utilization (ρ) is defined as the ratio of input rate to processing rate: ρ = λ/μ.
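To make this concrete, the following minimal sketch (in Python; the rates are illustrative and ours, not the authors') verifies Little's Law for the single server M/M/1 model, in which E(W) = 1/(μ - λ):

    # Little's Law in an M/M/1 queue: E(L) = lam * E(W).
    # Illustrative rates, not taken from the paper.
    lam = 4.0   # arrival rate (customers per time unit)
    mu = 5.0    # processing rate (customers per time unit)

    rho = lam / mu           # utilization
    EW = 1.0 / (mu - lam)    # average time in system (M/M/1 result)
    EL = lam * EW            # Little's Law: average number in system

    print(f"rho = {rho:.2f}, E(W) = {EW:.2f}, E(L) = {EL:.2f}")
    # rho = 0.80, E(W) = 1.00, E(L) = 4.00 (= rho/(1-rho))

Note how congested the system already is at 80% utilization: an average of four orders is in the system at any moment.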
Based on these definitions, one can quantify quite easily the impact of uncertainty and utilization on average lead time. In general, one can state that higher utilizations and/or higher levels of uncertainty cause longer waiting times and consequently longer lead times and higher levels of inventory. This in turn induces strategies to improve performance. One possibility is to consider capacity expansion (see paragraph A); another is to reduce the uncertainty in the system by eliminating all disruptions. This can be accomplished by automation, a better trained work force, standardisation of processes, more design efforts, improved maintenance practices, quality improvements or, in general, all efforts related to continuous improvement.
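The paper deliberately stays qualitative, but the joint effect of utilization and variability can be sketched with the well-known Kingman-type approximation for the average waiting time in a general single server (G/G/1) queue; this is a standard textbook formula, not the authors' own model, and the parameter values below are invented:

    # Kingman's approximation for the mean waiting time in a G/G/1 queue:
    #   E(Wq) ~ (rho/(1-rho)) * ((ca2 + cs2)/2) * E(S)
    # ca2, cs2 = squared coefficients of variation of interarrival and
    # service times; E(S) = mean service time. Values are illustrative.
    def kingman_wq(rho, ca2, cs2, es):
        return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * es

    es = 1.0
    for rho in (0.70, 0.80, 0.90, 0.95):
        low = kingman_wq(rho, ca2=0.25, cs2=0.25, es=es)
        high = kingman_wq(rho, ca2=1.0, cs2=1.0, es=es)
        print(f"rho={rho:.2f}: E(Wq)={low:6.2f} (low variability)"
              f" vs {high:6.2f} (high variability)")

The table it prints shows both levers at once: the waiting time explodes as utilization approaches one, and at every utilization level the more variable system waits far longer.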
In this approach the protection is based on a safety time. The safety time can be quantified by means of a multiplier. The question is by what factor we have to multiply the average lead time so that a quoted lead time is met X% of the time. Traditional inventory theory is mainly concerned with fixing order quantities and safety stocks. The new approach is concerned with quoting reliable lead times and consequently requires a safety time protection. In many cases the issue is not to quote a lead time but to satisfy a market-imposed lead time. It is clear that more safety time will be needed the larger the variability of the lead time. Moreover, the level of capacity utilization is also very important. Higher levels of utilization cause higher lead time variances and service levels will deteriorate. The congestion phenomenon (utilization and uncertainty) is again the key to any lead time reduction program. See Lambrecht, Chen and Vandaele (1994) for a more detailed discussion.
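As a sketch of the multiplier idea, suppose for simplicity that the lead time is exponentially distributed with mean E(W), as it is in the M/M/1 model above; this simplifying assumption is ours and is not the model of Lambrecht, Chen and Vandaele (1994). The quoted lead time that is met X% of the time is then the X% quantile of the exponential distribution:

    import math

    # For an exponential lead time with mean EW, the x-quantile equals
    # -ln(1-x) * EW, so the safety-time multiplier is simply -ln(1-x).
    for x in (0.80, 0.90, 0.95, 0.99):
        multiplier = -math.log(1.0 - x)
        print(f"meet the quote {x:.0%} of the time: "
              f"quote {multiplier:.2f} times the average lead time")

Under this assumption a 95% reliable quote is already three times the average lead time, which illustrates how expensive lead time variability is in terms of safety time.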
C. Lot Sizing and Lead Times

Another key variable that impacts the lead time is the lot sizing decision. The lot sizing decision is probably the most intensively researched issue in operations management. The traditional approach focuses on balancing ordering costs and inventory holding costs. Since the advent of time-based strategies, attention has turned to analyzing the impact of lot sizing on lead time. Traditionally the lead time was held constant; the objective now is to replace the deterministically assumed lead time by a stochastic lead time as a function of the lot size, uncertainty, capacity utilization and other parameters. The determination of this stochastic lead time is based on queueing theory and has been analyzed by Banker et al. (1988), Williams (1984), Zipkin (1986) and Wein ((1990), (1992)). Amazingly enough, this relationship has been misinterpreted by many researchers and practitioners. The reasoning goes as follows: large lot sizes will lengthen the lead time and small lot sizes will automatically result in short lead times. This is wrong. Queueing theory will keep us on the right path. The rationale goes as follows: for a given setup time, some portion of the available time at a production facility will be spent on performing setups. Total setup time depends of course on the lot size. A small lot size results in a larger proportion of setup time and the capacity utilization of the production facility will increase. So, by manipulating the lot size the capacity utilization can be changed, and we know from the previous sections that utilization impacts the lead time. A numerical sketch of this mechanism follows below.
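The mechanism can be illustrated with a small stylized model of our own (the parameters are invented): batches of size Q arrive at rate d/Q, and each batch occupies the facility for a setup time s plus Q·t processing, so that ρ = d·(s/Q + t); treating the facility as an M/M/1 queue at the batch level then gives E(W) = E(S)/(1 - ρ):

    # Average batch lead time as a function of lot size Q.
    # Parameters are illustrative: demand d (units/hour), setup time s
    # (hours per batch), unit processing time t (hours per unit).
    d, s, t = 8.0, 1.0, 0.05

    for Q in (15, 20, 30, 40, 60, 100, 150):
        rho = d * (s / Q + t)   # congestion effect: grows as Q shrinks
        ES = s + Q * t          # batching effect: grows with Q
        EW = ES / (1.0 - rho)   # M/M/1-style mean time in system
        print(f"Q={Q:4d}: rho={rho:.2f}, E(W)={EW:6.2f} hours")

The printed lead times first fall and then rise again as Q grows, tracing exactly the convex relationship discussed next.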
At this point it can be shown that two phenomena are present in the lot sizing decision: a batching effect and a congestion (saturation) effect. A large batch will cause a long lead time (batching effect), but on the other hand very small batches will increase the capacity utilization (the setup time portion), congestion starts and consequently lead times will go up again. Both phenomena result in a convex relationship between lot size and average lead time. The conclusion is that both large and small lot sizes cause long average lead times. Analogously to the previous section, it can be shown that the variance of the lead time is also a convex function of the lot size. Consequently, customer service will deteriorate both for very small and for very large lot sizes. It is interesting to note that exactly the same conclusion is reached in the traditional cost-based approach, balancing holding costs and setup costs. In the queueing approach, we balance the batching and the congestion effect. Both approaches will, however, not result in the same optimal lot sizes. The full benefits of reduced batch sizes can only be obtained by reducing the level of uncertainty (disruptions), by maintaining a reasonable level of excess capacity or by reducing setup times. The very popular setup-time reduction programs fit perfectly in this approach; they are an excellent way to realize continuous flow production, short lead times and high service levels. For more details see Karmarkar (1987). One of the recent developments in computer communication systems such as computer networks has opened new perspectives for lot sizing models. A common mode of operation for computer networks is polling. A polling model is a queueing model composed of a set of queues and a single server who visits the queues in a predetermined order. The data transfer from the terminals to the computer is controlled via a polling scheme in which the computer "polls" the terminals, requesting the data, one terminal at a time (Westrate (1992)). In such a situation it is important to know how long the computer serves the same terminal. The analogy with a lot sizing problem is obvious.

III. CONCLUSION

Most manufacturing operations are stochastic because of uncertainty in the timing of customer orders or the receipt of purchased material
and because of variability in the processing and set-up times caused by various disruptions. All this increases congestion and consequently inflates lead times and creates excess inventories. In a time-based production environment that's exactly what we want to avoid. So the basic question is how to handle congestion, how to take advantage of the trade-offs between various performance measures such as work-in-process, lead times and investment in capacity. Insights from queueing theory are of great help here. A first strategy is to install some capacity in excess of expected demand. Indeed, capacity can be used to buffer the system against unexpected events (instead of the standard inventory buffers). This strategy is somewhat contrary to the traditional performance measure of resource efficiency. That is probably the reason why many companies are reluctant to have large amounts of standby capacity; after all, a large part of the Belgian industrial sector is highly focused on scale-intensive activities (VEV, 1994). Instead of focusing on excess capacity it may be advisable to concentrate on a flexible use of the existing capacity (flexible working time schemes). This in turn offers a new incentive for increasing the use of flexible labor, both in terms of the number of people employed (numerical flexibility) and in terms of the mobility of employees to undertake a range of tasks (functional flexibility). A second strategy is to focus on uncertainty and variability reducing programs. Indeed, the most damaging factor in the pursuit of a fast cycle strategy is the existence of all sorts of disruptions. Disruptions lead to congestion, lower the speed and lead to high capital costs and inefficiencies all over. Process stability and reliability are obtained by quality and maintenance improving programs, by better designs and, most importantly, by installing a problem-solving attitude in all those involved in manufacturing. This is probably best obtained by focusing on small group activities in which learning and knowledge accumulation can result in enhanced human competence and organizational commitment.

REFERENCES

Banker, R., Datar, S. and Kekre, S., 1988, Relevant Costs, Congestion and Stochasticity in Production Environments, Journal of Accounting and Economics, 10, 171-197.
Karmarkar, U., 1987, Lot Sizes, Lead Times and In-Process Inventories, Management Science, 33(3), 409-423.
Kim, T., 1993, Reducing Inventory and Improving Productivity: Evidence from the PIMS Data, (Working Paper, University of California, San Diego).
Lambrecht, M., Shaoxiang Chen and Vandaele, N., 1994, A Lot Sizing Model with Queueing Delays: The Issue of Safety Time, (Onderzoeksrapport 9402, Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Meyer, C., 1993, Fast Cycle Time, (The Free Press).
Morse, P., 1958, Queues, Inventories and Maintenance, (John Wiley).
Vlaams Ekonomisch Verbond, 1994, Op zoek naar Groei: Het Strategisch Plan voor Vlaanderen, (Uitgeverij Pelckmans).
Wein, L., 1990, Scheduling Networks of Queues: Heavy Traffic Analysis of a Two-Station Network with Controllable Inputs, Operations Research, 38(6), 1065-1078.
Wein, L., 1992, Dynamic Scheduling of a Multiclass Make-to-Stock Queue, Operations Research, 40(4), 724-735.
Westrate, J., 1992, Analysis and Optimization of Polling Models, (Doctoral Dissertation, Katholieke Universiteit Brabant).
Whitt, W., 1983, The Queueing Network Analyzer, The Bell System Technical Journal, 62, 2779-2815.
Williams, T., 1984, Special Products and Uncertainty in Production/Inventory Systems, European Journal of Operational Research, 15, 46-54.
Zangwill, W., 1992, The Limits of Japanese Production Theory, Interfaces, 22, 14-25.
Zipkin, P., 1986, Models for Design and Control of Stochastic, Multi-item Batch Production Systems, Operations Research, 34(1), 91-104.
Zipkin, P., 1991, Does Manufacturing Need a JIT Revolution?, Harvard Business Review, Jan-Feb., 40-50.
Tijdschrift voor Economie en Management Vol. XXXIX, 4, 1994
Statistical Failure Prevision Problems

by

Y. DIRICKX* AND G. VAN LANDEGHEM*
I. INTRODUCTION

In paragraph II the "prediction of failure" problem is described in detail, based on a systems approach. Taking into account the specifics of the failure prevision model, the paper continues with a discussion of the relevance of statistical classification methods and a brief presentation on statistical classification. Paragraph IV makes the link between failure prevision and accountancy (financial statement analysis). The article continues with a review of failure prevision research in Belgium (mainly the work of Ooghe et al. at the University of Ghent). The dynamics underlying the failure prevision process are analyzed in the sixth paragraph. In a last paragraph future research directions are explored, relating to the quality of the underlying data and the validation of failure prevision rules. A major conclusion of the work is, in fact, that advances in the development of failure prevision rules have to be based on a better statistical validation.

II. FAILURE PREVISION PROBLEMS
The prediction of failure (bankruptcy or any other kind of distress) of companies is a problem of applied economics for which (according to Foster (1986)) a suitable economic theory does not exist. Consequently, attempts have been made to apply statistical techniques - often classification techniques - to help out. Despite the difficulties in operationalizing the statistical techniques for this problem, statistical failure prevision rules seem to have found some acceptance by
" Department of Applied Eco~lomicS c i e i i ~\, ~K.U.Leuven
practitioners (cfr. Altman, Haldeman and Narayanan (1977), Bossier (1992), Labro (1992), Vlaamse Commissie voor Preventief Bedrijfsbeleid (1992)), for example in corporate lending decisions by banks. An introduction to failure prevision methods can be found in chapter 15 of Foster (1986) or chapter 9 in Rees (1990). Some sources of recent articles on the subject are: Omega International Journal of Management Science, Journal of Business Finance and Accounting, Cahiers Economiques de Bruxelles, Journal of Banking and Finance, Bedrijfskunde, and Accountancy en Bedrijfskunde (Kwartaalschrift).
A. The prediction of failure
Failure prevision rules are designed to predict, for any individual company in a population, whether it will 'fail' within a given period. The '3 years before failure' function in Table 3 of Skogsvik ((1990), p.145) is a typical example of such a failure prevision rule. The input of that rule consists of the following data about the company:
- interest expense / all liabilities and deferred taxes
- income taxes / profit before taxes
- inventory / revenues
- cash / current liabilities
- owners' equity / all assets
The value of a function of these financial statement numbers serves as an estimate of the probability of failure within 3 years. The estimated probability is compared with a threshold value (Skogsvik (1990), p.148) in order to obtain the prediction (the company will / will not fail within three years).
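Schematically, applying such a rule amounts to the following sketch; the logistic form, the coefficients and the threshold are invented for illustration and are not Skogsvik's estimates:

    import math

    # A failure prevision rule: a function of the five ratios yields an
    # estimated failure probability, which is compared with a threshold.
    # Coefficients, intercept and threshold are hypothetical.
    def failure_probability(ratios, coefficients, intercept):
        score = intercept + sum(c * r for c, r in zip(coefficients, ratios))
        return 1.0 / (1.0 + math.exp(-score))   # logistic link

    ratios = [0.08, 0.30, 0.25, 0.15, 0.20]      # the five inputs, in order
    coefficients = [4.0, -1.0, 2.0, -3.0, -5.0]  # invented weights
    p = failure_probability(ratios, coefficients, intercept=-0.5)
    prediction = "will fail" if p > 0.5 else "will not fail"
    print(f"estimated P(failure within 3 years) = {p:.2f} -> {prediction}")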
Figure 1 is a schematic representation of the prediction of failure. It will be helpful in the formulation of some features of the failure prevision problem. The five ratios in the example from Skogsvik (1990) are represented in Figure 1 by the box labeled 'input'. The branch of research which is the subject of this text has mainly relied on financial data, and more precisely on annual account data, to predict corporate failure (cfr. Rees (1990), ch.9, Foster (1986), ch.15, Ooghe and Verbaere (1985), p.22). Hence the emphasis on the financial characteristics of the company in Figure 1.
FIGURE 1. The prediction of failure (schematic). The annual accounts available before time t are an abstract (arrow A) of the complete description of the financial evolution of the company up to time t; together with other characteristics of the company and its environment (arrows B) they determine the event 'failed / did not fail' in [t, t+T] (arrows D). A reduced set of financial variables serves as the input of the prevision rule (arrow C).
However, even if a complete (i.e., arbitrarily detailed) description of the financial state of the company were manageable and available (which is not the case in practice; arrow A in Figure 1), we cannot expect that its future financial evolution can be deduced exclusively from its financial history. Non-financial characteristics of the firm and environmental influences are often less accessible to the decision maker, but undoubtedly contribute to the evolution of the firm between t and t + T (arrows B). Nevertheless, non-financial characteristics of the individual firm and environmental variables can be and have been added to failure prevision rules. (For example: activity codes (non-financial) and the average values of financial data in the firm's industry (environmental) are used as inputs to the failure prevision rules described in Platt and Platt (1990).) However, it is obviously impossible to refine the description of the company until the stochastic element is eliminated from the prediction problem. As no theoretical model is available to describe the evolution of a company towards failure, a failure prevision rule can only be constructed by means of a statistical algorithm, starting from a training sample of companies whose history is (partially) known. The available description of a firm's history typically consists of a number of annual accounts. The number of variables in one annual account (not to mention several accounts) is too large with respect to the usual size of the available sample. For example: the total sample which serves to build
the 'complete model, 3 years before failure' in Ooghe, Joos and De Vos (1993) consists of 399 annual accounts. Each account contains the values of more than 600 variables. Consequently, a reduction of the number of variables is necessary (arrow C in Figure 1), not only for the sake of interpretation, but also to avoid instability in the estimation of the failure prevision rule (Hand (1981), section 6.2). The word 'failed' in the box labeled event/prediction in Figure 1 can be defined in many different ways (see section II.C of the text). The arrows D in Figure 1 express that, in a typical failure prevision problem, the occurrence of failure is not simply a function of the annual accounts. Other firm characteristics and external factors add an extra stochastic component to the scheme. Obviously, a successful failure prevision rule is one which, in many cases (a 'case' being a company at a given moment), deduces from the values of the input variables a prediction ('fails in [t, t+T]' or 'does not fail in [t, t+T]') which agrees with the actual event.
B. Lack of theory

Many authors and critics of failure prevision studies have remarked that economic theory does not provide any sound guidelines for the failure prevision problem (e.g., Foster (1986), p.559 and Rees (1990), p.406). The following pragmatic stand taken by Foster argues against one-sided criticism of empirical failure prevision research without a firm theoretical backing: '(The) major contribution of the research (on financial distress) to date is documenting empirical regularities. This documentation is important both for decision making by creditors and management and to researchers wishing to model economic aspects of financial distress' (Foster (1986), p.560). The lack of a proper theory places the burden of the validation of a failure prevision rule entirely on the sampling method and the statistical algorithms. Consequently, efforts to refine the statistical methodology are at the origin of many papers in the failure prevision literature (cfr. section VII.B).
C. The definition of failure
There are many different failure prevision problems, and an important parameter that differentiates between them is the definition of failure. In one context, a company may be considered to 'fail' when it is unable to continue its regular interest payments to a bank. In another case, 'failure' may be defined as a juridical declaration of insolvency (bankruptcy in the American terminology). We have already noted (arrows D in Figure 1) that, for the definitions of failure that are of practical interest, 'failure' cannot be expected to depend on the financial evolution alone, and a fortiori not on annual account variables alone. (Foster and Rees, among others, have pointed out the difference between the abstract concept of 'financial distress' and observable 'failure', cfr. Rees ((1990), p.392) and Foster ((1986), p.535).) For example: even when two companies are in exactly the same distressed financial situation, one company may be forced to interrupt debt repayments, while the other may be rescued by a related company. Gilbert et al. found empirical evidence '(...) suggesting that the resolution of distress is influenced by other, perhaps nonfinancial, factors' (Gilbert, Krishnagopal and Schwartz (1990), p.162). It is not sufficient to formulate a definition of 'failure' which is relevant with respect to the decision context. As it is necessary to construct the failure prevision rule from a training sample (cfr. II.A and III.B.5), it must also be possible to unambiguously detect failure (a posteriori) in a company. For that reason, academic researchers often use a juridical definition of failure in their studies. While bankruptcy prevision is a relevant research topic, the fact that it has received more attention than other types of failure prevision seems to be due only to practical considerations. It is not obvious that results from bankruptcy prevision research can be transplanted to, for example, the decision problem of predicting a given type of loan default in the population of the industrial customers of a bank.
D. Definition of the population

One aspect of the precise formulation of a failure prevision problem is the definition of a population, i.e., a description of the group of companies which are involved in the problem. Indeed, no failure prevision rule can be expected to function for every type of company. As
failure prevision relies in practice on statistical methods (II.A, II.B), clear guidelines (i.e., the definition of a population) are needed for the collection of samples. When a prevision rule has been derived and validated by means of the samples, the decision maker must be well aware of the inherent limitation to a given type of companies, and restrict the application of the rule to this population. First of all, the population must be relevant for the decision maker. For example: a commercial bank may be specialized in lending to Belgian medium-sized companies in the building sector. Typical population boundaries are nationality (because the availability and meaning of accounting variables are partly determined by law), the size of the company, its activity code (for example NACE (1970)), and juridical criteria. From the viewpoint of statistics, the choice of the population should be a compromise between the need for sufficiently large samples and the advantage of a 'homogeneous' target group. When there are sound reasons to believe that a certain subtype of companies is unsuitable for assessment by means of a statistical failure prevision rule, it is good practice to exclude them from the population. Indeed, they are likely to disturb the statistical derivation of the prevision rule for the other companies. For example: it may be advantageous to exclude companies when their annual accounts violate too many formal requirements (VII.A). (The value of such a policy must be measured empirically.) Naturally, the exclusion of problematic cases from the population is only warranted if they can be identified at the moment of decision making. (While this may seem obvious, several failure prevision studies exclude cases which can only be identified ex post (VII.B.4).)

E. Choice of a criterion of predictive performance

As perfect prediction is utopian, it is necessary to define a measure of the predictive ability of potential failure prevision rules. In most failure prevision studies, this is done (sometimes implicitly) by assigning costs to erroneous predictions. (An example is given in III.B.) The relevant costs might be, for example, on the one hand the commercial loss incurred by a bank denying a loan to a 'healthy' company, and on the other hand the credit loss, when a loan is granted to a company which later proves to be unable to repay a part of the capital or the interest (cfr. Ooghe, Joos and De Vos (1993), Altman, Haldeman and
Narayanan (1977)). A prevision rule is better when its average error cost per decision is lower. In practice, the future performance of a potential prevision rule can only be estimated statistically. We confide in a prevision rule with a sufficiently low estimated average error cost per decision.

F. The availability of data

Finally, an important 'parameter' which distinguishes between failure prevision problems is the availability of data to build and evaluate prevision rules with. The input of many failure prevision rules is limited to annual account data (II.A) because they can be collected cheaply. Access to data may also be limited by the specification of a maximal sample size (due to costs of data acquisition and processing) and a time period; for example: an important change in accounting standards and a technical improvement of the access to Belgian annual accounts (access through CD-ROM instead of microfilm) makes 1984/1985 a natural limit for such a time period in Belgian failure prevision research (Meulemans, Van Acoleyen, Flamee and Merchiers (1992), Balanscentrale (1991)).
The specification of a failure prevision problem involves a description of the meaning of failure, the prediction objective, the relevant population, the measure of predictive performance and the availability of data. A precise specification is necessary to avoid errors in the validation and misuse of prevision rules. The solution of a failure prevision problem consists of the construction of a failure prevision rule and the estimation of its future performance. This is done by choosing suitable sampling techniques and statistical algorithms and applying them to derive a prevision rule (III.A, III.B) and a performance estimate from the available data (III.B, VII.B). For an introduction to the 'traditional' methods of solving failure prevision problems, we refer to part III.B for the statistical background, and to Altman, Haldeman and Narayanan (1977) as a typical example. The 1977 work of Altman et al. is a convenient benchmark for recent and future work in the domain of failure prevision.
Failure prevision research is concerned with discovering suitable methods to solve failure prevision problems (parts VI and VII) and with interpreting successful failure prevision rules (part IV).

III. FAILURE PREVISION PROBLEMS AND STATISTICS

A. Statistical classification methods

The problem of assigning a given object to one member of a set of mutually exclusive classes (also called categories, groups, ...) is an important topic in statistics. 'Classification', 'clustering', 'discrimination', 'separation', 'discriminant analysis' are conventional terms to refer to (parts of) this branch of statistics. The main reason for the large amount of attention devoted to statistical classification methods is, undoubtedly, the enormous range of possible applications. Among the areas of application are: speech recognition, image processing, medical diagnosis and taxonomy. Moreover, statistical classification is linked with (or part of) other major topics in statistics, e.g.: estimation, statistical tests, multivariate analysis, regression analysis. As a consequence, it is an interesting subject for anyone who wants to get familiar with the general principles of statistics. Most books on multivariate statistics contain one or more chapters that can serve as an introduction to statistical classification, for example Johnson and Wichern (1992). An overview of the state-of-the-art in statistical classification at the beginning of the eighties can be found in Hand (1981). Some sources for more recent work are: Journal of the American Statistical Association, Decision Sciences, European Journal of Operational Research, Journal of Marketing Research, Journal of the Royal Statistical Society B, Technometrics, Biometrics, International Journal of Accounting, Journal of Econometrics, IEEE Transactions on Information Theory, and Pattern Recognition.

B. Basic concepts in discriminant analysis and statistical classification
1. Introduction

Almost everybody has some understanding of statistics, and almost every reader of this article probably remembers 'statistics' (including probability theory) as - to put it mildly - a rather difficult subject
matter. Discriminant analysis and classification theory - the subjects of this section - can be seen as core subjects of multivariate statistics, which is the tougher brother of the ordinary statistics most of us already find difficult. Instead of getting discouraged by the above, let us take up the challenge! The simple reason is that the subject matter is absolutely fascinating and has widespread real-life applications (for example in failure prevision, as is discussed in the other sections of this paper). First, what is discriminant analysis / statistical classification? The term discriminant analysis goes back to Sir R.A. Fisher - one of history's most famous statisticians. We follow the description of Johnson and Wichern ((1992), ch.11): the goal of discriminant analysis is 'to describe either graphically (in three or fewer dimensions) or algebraically, the differential features of objects (observations) from several known collections (populations)'. 'We try to find 'discriminants' whose numerical values are such that the collections are separated as much as possible.' Statistical classification, in turn, refers to a set of techniques which goes a step further and develops precise rules that are used to assign new objects to two or more already labeled classes of objects. A well-known illustration can be found in finance, namely in credit granting. Given a specific request for credit, a bank manager has to decide whether to grant this credit or not (the binary nature of the decision is a simplification of reality). Some (if not most) banks use discriminant analysis / statistical classification to support credit granting decisions. The values of carefully selected variables such as age, income, family size, mortgage level, mobility, etc. are combined into a score (through a linear function), for each historical credit request, and confronted with the actual payback behaviour. Discriminant analysis divides the population in two groups of 'good' and 'bad' risk, and the cutoff point between 'good' and 'bad' is a particular critical score (in practice there is a so-called grey area). New credit requests are classified: if the individual measurements lead to a score below the critical score, no credit is granted; scores above the critical value lead to acceptance. Credit granting is discussed in Foster ((1986), ch.16). In the following subsections we will give an informal introduction to the key concepts of statistical classification, requiring only common sense and the level of knowledge of probability theory common to the average church-going gambler. For those who want to indulge
in some mathematical escapades, we refer to Johnson and Wichern ((1992), ch.11). The discussion of this section will be based on a game described in the following illustration.

2. Illustration

Consider a contaminated game. If game A is played, the numerical outcome can be 1, 2, 3, 4, 5 or 6; if game B is played, the numerical outcome can be 5, 6, 7, 8, 9 or 10. For each game, each outcome has equal probability, i.e., 1/6. The 'observer' does not know which game is being played, so if outcome 5 or 6 materialises there is ambiguity. But the design of the contamination is such that in 80% of the cases game A is being played (hence in 20% of the cases game B), and this is known to the observer. (These percentages are called the a priori probabilities of the games.) The payoff structure is as follows. If the observer guesses the correct game he receives zero. If he claims it was game B but, in fact, it was game A, the observer must pay 0.5; alternatively, if he guessed it was game A where, in fact, game B was on, the observer must pay 2. In notation: C(B|A) = 0.5 and C(A|B) = 2.
(The parameters C(B|A) and C(A|B) are often called the error costs or misclassification costs.)
3. Evaluation of different strategies

The observer now wants to develop a strategy (also called decision rule, classification rule) that, in a sense, minimizes the average cost of participating in this game. On top of that, the observer is a straightforward person and wants to pick one of the following strategies1.
S1: DOM(→A) = {1, 2, 3, 4}       DOM(→B) = {5, 6, 7, 8, 9, 10}
S2: DOM(→A) = {1, 2, 3, 4, 5}    DOM(→B) = {6, 7, 8, 9, 10}
S3: DOM(→A) = {1, 2, 3, 4, 5, 6} DOM(→B) = {7, 8, 9, 10}
Although straightforward, the observer is wise: other strategies need not be considered2. Figure 2 will be helpful in the following analysis.

FIGURE 2. The possible outcomes 1 to 10: game A can produce the outcomes 1 through 6, game B the outcomes 5 through 10, so outcomes 5 and 6 are ambiguous.
EVALUATION OF STRATEGY S1. Suppose game A is being played; then the probability that the observer chooses game B is 1/6 + 1/6, relating to the outcomes 5 and 6, or P(B|A) = 1/3. Similarly we note that P(A|B) = 0, since outcomes 1, 2, 3 and 4 cannot occur under game B and are precisely those that, if realised, lead to the choice of game A. Clearly, the probability that game A is being played is 0.8; if A is played the chance of misclassification is 1/3 and this costs, if this event occurs, 0.5 units; or on the average: 0.8 · P(B|A) · C(B|A) = 0.8 · 1/3 · 0.5. Similarly we have for game B: 0.2 · P(A|B) · C(A|B) = 0.2 · 0 · 2. Putting the pieces together, the average (expected) cost of strategy S1 is then:

EC(S1) = 0.8 · P(B|A) · C(B|A) + 0.2 · P(A|B) · C(A|B) = 2/15

EVALUATION OF STRATEGY S2. See Figure 2, to conclude: P(B|A) = 1/6 and P(A|B) = 1/6, so that:

EC(S2) = 0.8 · 1/6 · 0.5 + 0.2 · 1/6 · 2 = 2/15
EVALUATION OF STRATEGY S3. This gives P(B|A) = 0 and P(A|B) = 1/3, and

EC(S3) = 0.2 · 1/3 · 2 = 2/15

CONCLUSION. The observer is indifferent between S1, S2 and S3, the reason being the symmetry in the prior probabilities and the misclassification costs: 2/0.8 = 0.5/0.2.

4. Illustration of an asymmetrical situation

Now change the probability that game A is being played from 0.8 to 0.6, and we find:

EC(S1) = 0.6 · 1/3 · 0.5 = 1/10 = 6/60
EC(S2) = 0.6 · 1/6 · 0.5 + 0.4 · 1/6 · 2 = 11/60
EC(S3) = 0.4 · 1/3 · 2 = 16/60

Strategy S1 becomes optimal!
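The whole evaluation can be replayed mechanically; the following sketch (ours) computes the expected cost of each strategy exactly, for both the symmetric and the asymmetric prior:

    from fractions import Fraction

    # Expected error cost in the contaminated game. Game A yields 1..6,
    # game B yields 5..10, each outcome with probability 1/6.
    outcomes_A, outcomes_B = set(range(1, 7)), set(range(5, 11))
    C_B_given_A = Fraction(1, 2)   # cost of guessing B when A was played
    C_A_given_B = Fraction(2)      # cost of guessing A when B was played

    strategies = {                 # DOM(->A): outcomes classified as A
        "S1": set(range(1, 5)),
        "S2": set(range(1, 6)),
        "S3": set(range(1, 7)),
    }

    def expected_cost(dom_A, pA):
        p_B_given_A = Fraction(len(outcomes_A - dom_A), 6)
        p_A_given_B = Fraction(len(outcomes_B & dom_A), 6)
        return (pA * p_B_given_A * C_B_given_A
                + (1 - pA) * p_A_given_B * C_A_given_B)

    for pA in (Fraction(4, 5), Fraction(3, 5)):
        print(f"P(A) = {pA}:",
              {name: str(expected_cost(dom, pA))
               for name, dom in strategies.items()})
    # P(A) = 4/5: all three strategies cost 2/15.
    # P(A) = 3/5: S1 costs 6/60, S2 11/60, S3 16/60 -> S1 is optimal.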
5. Measures of performance

Let us continue with the last example (no symmetry: P(A) = 0.6 and P(B) = 0.4) to gain some more insight. The probability that an observation (outcome) comes from game A and is 'classified' (guessed) as being so is (using S1!): P(A) · P(A|A) = P(A) · (1 - P(B|A)) = 0.6 · (1 - 1/3) = 0.4. Next, the probability that an outcome stems from game B and is 'misclassified' (wrongly guessed) as from A is: P(B) · P(A|B) = 0. The formula P(B) · P(B|B) computes the probability of correct classification (guess) in B and is 0.4. Finally, P(A) · P(B|A) = 0.6 · 1/3 = 0.2 is the probability of a misclassification as B. In a table:
                  classified as
                   A       B
  correct game A   0.4     0.2
               B   0       0.4
The above table is important since its off-diagonal elements give information about the 'performance' of the chosen strategy: on the average 20% misclassifications occur. In practice, a table like the one above is based on experimental data. A classification rule (which has been derived from a training sample) is evaluated by classifying the cases of a testing sample. Both training and testing sample consist of cases with both outcome (1, 2, ..., 9 or 10) and game (A or B) known. The result of the test is summarized in the table, so that the elements of the table are frequencies related to a known sample size. The table is known as the confusion matrix, which looks like:
                   classified as
                    A          B
  correct game A    nAA (70)   nAB (25)   nA.
               B    nBA (10)   nBB (95)   nB.
                    n.A        n.B        n

The overall sample size is n (= 200), and the marginal totals are defined by nA. = nAA + nAB, nB. = nBA + nBB, n.A = nAA + nBA, and n.B = nAB + nBB.
The reason for introducing the confusion matrix and the (fictitious) numerical example is to illustrate the concept of apparent (or estimated) error rate (APER3), which is defined as

APER = (nAB + nBA) / n = (25 + 10) / 200 = 0.175
Recall again our contaminated game; in that case the probabilities of the various outcomes are known and we can be a bit more sophisticated than using APER. Instead we calculate the error rate4 associated with strategy S1:

ER(S1) = P(A) · P(B|A) + P(B) · P(A|B) = 0.6 · 1/3 + 0.4 · 0 = 6/30,

where the first term gives the probability that - given game A is played - the observer chooses game B; the second term is similar. Furthermore,

ER(S2) = 0.6 · 1/6 + 0.4 · 1/6 = 5/30
ER(S3) = 0.6 · 0 + 0.4 · 1/3 = 4/30

Obviously, the strategy that minimizes the cost of misclassification (i.e., S1) need not be the one with the smallest error rate; our example illustrates this point. We claim now that we have covered the essential concepts of discriminant analysis and statistical classification, at least at a conceptual level.
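Both performance measures are easy to compute; the sketch below (ours) reproduces the APER of the fictitious confusion matrix and the exact error rate of strategy S1:

    # Apparent error rate from the fictitious confusion matrix above.
    n_AA, n_AB = 70, 25    # correct game A, classified as A / as B
    n_BA, n_BB = 10, 95    # correct game B, classified as A / as B
    n = n_AA + n_AB + n_BA + n_BB            # 200

    aper = (n_AB + n_BA) / n
    print(f"APER = {aper:.3f}")              # 35/200 = 0.175

    # With known outcome probabilities (P(A) = 0.6, strategy S1) the
    # exact error rate is available instead of an estimate:
    er_s1 = 0.6 * (1 / 3) + 0.4 * 0          # P(A)P(B|A) + P(B)P(A|B)
    print(f"ER(S1) = {er_s1:.3f}")           # 6/30 = 0.200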
C. Failure prevision and statistical classification

A failure prevision problem (part II) can be modeled as a statistical classification problem (III.B). In his 1968 paper, Altman used linear discriminant analysis (Johnson and Wichern (1992), ch.11) to construct a failure prevision rule (Altman (1968)). In subsequent research, statistical classification methods, and more specifically linear discriminant analysis and logit analysis (e.g., Espahbodi (1991)), have been the most widely applied algorithms (see Rees (1990), section 9.4 for a discussion of the application of linear discriminant analysis and logit analysis in failure prevision). From the viewpoint of statistics, failure prevision problems are an interesting playground for experiments with statistical classification methods. Since 1984, a large number of Belgian annual accounts (for example: more than 100,000 in 1988 alone, Balanscentrale (1990)), each containing the values of more than 230 variables (in the case of a complete account more than 630 variables), can be accessed fairly easily by means of the Balanscentrale CD-ROMs (Balanscentrale (1989), Balanscentrale (1991)). By sampling from subpopulations (partitioned, for example, according to activity codes, NACE (1970)) and defining different subsets of variables, a varied collection of input data sets for statistical classification can be acquired. They can be employed to test the performance of classification methods on real-life data, or they can be used as a basis for a larger set of simulated test problems.
D. The time parameter

An additional interesting aspect of failure prevision datasets from the viewpoint of a statistical analysis is that a time parameter is involved. When a prediction rule is applied to a company at a given moment, or when it appears in the training sample, usually several of its annual accounts are available. Obviously, consecutive annual accounts are not independent from each other, and it may be worthwhile to model this relationship and include it in the prediction rule and in the algorithm which derives the prediction rule from the training sample. Note, however, that consecutive annual accounts cannot simply be interpreted as repeated measurements of a static underlying quantity: a company is a dynamic entity, it may evolve towards or away from failure. Moreover, as the population of interest (which supplies the
training sample) and the economic environment change continually, the question of updating the failure prevision rule is raised. As classification methods such as linear discriminant analysis, which for many years have been the standard methodology for failure prevision, do not contain an explicit time parameter, there may be some scope for improvement of failure prevision rules through the application of methods which incorporate the time aspect. As a matter of fact, the time aspect has received attention in the recent literature, for example in Keasey (1990), Falbo (1991), Laitinen (1993), Luoma (1991), Kassab (1991). This topic is discussed further in part VI.
E. A didactical tool

Johnson and Wichern included failure prevision problems in their textbook on multivariate statistics, to illustrate the application of statistical classification methods (Johnson and Wichern (1992), p.564). This indicates that failure prevision problems can serve as a didactic aid to teach important elements of statistics (sampling, estimation, validation, variable selection, modelling), particularly to students in applied economics. The fact that in recent years several students of the K.U. Leuven Department of Applied Economic Sciences opted for a failure prevision experiment as the subject of their licentiate's thesis also attests to the value of the failure prevision problem as a didactic tool. They have, for example, investigated variable selection methods (Dierckx (1992)), compared logit analysis with linear discriminant analysis (Ponnet (1992)), explored the benefits of pooling the results of different predictions by means of a simple combination rule (Dauwe (1993)), examined the usefulness in failure prevision of risk of ruin models (the type of models employed in Vinso (1979); Peeters (1992)) and of separation methods based on mathematical programming (cfr. Gochet, Srinivasan, Stam and Chen (1993); Troch (1992)), replicated with Belgian data the investigation in Falbo (1991) of a transformation to 'dynamic variables' (Vandingenen (1994)), and conducted an experiment with a semiparametric multigroup discriminant method (including a cost matrix) (Boeykens (1994)).
IV. FAILURE PREVISION AND ACCOUNTANCY

Financial statement numbers acquire a meaning through their definition (i.e., the recipe which prescribes how to calculate them from the accounts) and through their application in analysis and decision making. An important part of academic research in the field of accountancy aims at testing accepted interpretations of financial statement numbers and discovering new meanings of the financial data. Statistical methods have played a significant role in that type of research. One type of investigation looks for (in)consistencies between properties of the joint distribution of financial statement numbers and their traditional meaning. A representative (and pioneering) example of this branch of research was reported in Pinches, Mingo and Caruthers (1973). By means of a factor analysis algorithm (Johnson and Wichern (1992), ch.9), Pinches et al. analysed the sample covariance matrix of a set of 48 financial ratios. Thus they constructed a taxonomy of the set of ratios, consisting of seven distinct subgroups. Pinches et al. showed that their taxonomy was remarkably stable (as a function of time), and that it could be interpreted in terms of the traditional meaning of the ratios. (For example: there is a subgroup of 'return on investment' ratios, a subgroup of 'capital intensiveness' ratios, etc.) The mere fact that such an interpretation of the empirically derived taxonomy is possible strengthens our confidence in the meaning which has gradually been attributed to the ratios through their application in real-life problems. A more recent example of this type of work is Gombola and Ketz (1983). Gombola and Ketz use factor analysis not only to describe the structure of a set of financial variables, but also to compare different sets. That enables them to differentiate between cash flow proxies that load mainly on the factors of the taxonomy derived in Pinches, Mingo and Caruthers (1973), and those that constitute 'new' information (adding a separate cash flow factor to the taxonomy). Moreover, they demonstrate that there is an evolution in the meaning of the cash flow proxies (see also Gombola, Haskins, Ketz and Williams (1987)). A second branch of research investigates the meaning of accounting variables by looking at their usefulness in particular contexts. As an example, we refer to chapter 8 in Rees (1990), where the prediction of mergers is discussed. A second example is corporate failure prevision, the subject of the present text. The prediction of failure
has mainly been based on financial variables, and among those predominantly on annual account variables (Rees (1990), ch.9). In such work, taxonomies of financial variables can serve as a starting point, in order to limit the complexity of selecting a set of variables with significant predictive power (e.g., Gombola, Haskins, Ketz and Williams (1987)). A sketch of the factor-analytic grouping idea is given below.
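In the sketch (ours), a principal-component decomposition is used as a stand-in for the factor analysis of Pinches et al., and the data are random placeholders rather than real financial ratios:

    import numpy as np

    # Group financial ratios by decomposing their correlation matrix.
    # 'ratios' stands for an (n_companies x n_ratios) data matrix; here
    # it is filled with random placeholder values.
    rng = np.random.default_rng(0)
    ratios = rng.normal(size=(100, 8))

    corr = np.corrcoef(ratios, rowvar=False)        # 8 x 8 correlations
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1]           # largest first

    # Ratios loading heavily on the same large component form a subgroup
    # ('factor') of the taxonomy.
    print("eigenvalues:", np.round(eigenvalues[order], 2))
    print("loadings on the first component:",
          np.round(eigenvectors[:, order[0]], 2))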
V. FAILURE PREVISION RESEARCH IN BELGIUM

Broadly speaking, three objectives can be distinguished in failure prevision research:
- the development of a practical decision tool, using the available statistical methodology;
- the improvement of the existing statistical failure prevision methods;
- the application of failure prevision methods as a probe for discovering facets of the meaning of accounting variables (part IV).
Some failure prevision studies (e.g., Altman, Haldeman and Narayanan (1977)) are concerned with the three objectives simultaneously. Belgian failure prevision work has been directed mainly towards the first and the third objective. Researchers at the R.U. Gent (Ooghe et al.) have constructed failure prevision rules that serve as benchmarks for other and future work. The main results have been reported in Ooghe and Verbaere (1985), Ooghe and Van Wymeersch (1988), ch.16 and Ooghe, Joos and De Vos (1993). They applied the generally accepted methods of (at first) linear discriminant analysis and (later) logit analysis, and put considerable effort in collecting large training and testing samples, ensuring the quality of the individual annual accounts, and selecting a set of variables with a sufficiently strong predictive power, with a meaningful interpretation, and which is generally available to the external analyst. Much attention was devoted to a clear presentation of the predictions (particularly in Ooghe, Joos and De Vos (1993)). An interesting feature of the studies by Ooghe et al. is that they report sufficient information to enable a user to assess the performance of the prevision rules in combination with any error cost ratio (for example: tables 8-11 in Ooghe, Joos and De Vos (1993)). Thus they were able to build prevision rules which are useful for practitioners. (For example: the rules are applied by the Vlaamse Commissie voor Preventief Bedrijfsbeleid, which monitors Flemish companies in order to detect
companies at risk and prevent failure (Vlaamse Commissie voor Preventief Bedrijfsbeleid (1992)).) Using the failure prevision rules by Ooghe et al. as benchmarks, Declerc et al. have tested the predictive power of alternative types of financial variables (funds flow variables in Declerc, Heins and Van Wymeersch (1992b), value added ratios in Declerc, Heins and Van Wymeersch (1992a)). Discriminant scores generated by the prevision rules of Ooghe et al. have also been used as a variable in empirical studies of corporate financial structure (Ooghe and Van Wymeersch (1988), ch.17). By comparison with another small country such as the Netherlands, where it seems to be difficult to collect sufficiently large samples of annual accounts (cfr. Wijn (1988), ch.4), the Belgian researcher has the advantage of a large database of annual accounts which is relatively easily accessible by means of the Balanscentrale CD-ROMs (Balanscentrale (1989), Balanscentrale (1991)). Surprisingly perhaps, these datasets have not (yet) been exploited in research aimed at improving the statistical approach to failure prevision (such as Kassab, McLeay and Shani (1991), Laitinen (1993)), or research which uses failure prevision datasets as a laboratory, e.g. for the study of (statistical) classification algorithms (Mahmood and Lawrence (1987), for example). Large samples of Belgian accounting data are available, but researchers have also stressed their limited formal quality (e.g. Bettonville, Jegers and Vuchelen (1992), Ooghe, Joos and De Vos (1993)). However, quantitative measurements of the effects of these formal deficiencies in a failure prevision context have not yet been reported (VII.A).

VI. RESEARCH TOPICS OF PRESENT AND FUTURE INTEREST: THE TIME PARAMETER IN FAILURE PREVISION

A. Active observer vs. passive observer
The usual statistical classification methods, such as linear discriminant analysis, are 'static', i.e., there is no explicit time parameter in their formulation (III.B). However, time does get involved when a statistical classification method is used for prediction, for example in failure prevision.
A consequence of the interaction between time and a statistical classification method in a prediction context is that two distinct classes of prediction problems arise. This becomes clear when one looks at the work of Altman et al. reported in Altman, Haldeman and Narayanan (1977), and the criticism of this work formulated in Wood and Piesse (1988). The Altman et al. (1977) article applies the 'standard' statistical classification algorithm, linear discriminant analysis. Wood et al. argue that the (very good looking) classification accuracy figures reported by Altman et al. about their 'Zeta-model' do not measure the performance of that model in a 'real life' prediction situation. The essential cause of this different evaluation of the performance of the Zeta model is, in our opinion, that the model is looked at from two different points of view. Altman et al. evaluate the performance of the Zeta-model as if it is a tool that is used by a passive observer. This observer has no influence on the population of companies which are classified by the model. In contrast, Wood and Piesse are interested in situations where the predictions of the failure prevision model influence the population and change it gradually (active observer). Beaver already mentioned the complexity of the active observer framework in Beaver (1966). Effects that have to be taken into account in the description of an active observer are, for example, that the mere prediction of failure may cause the failure of a healthy company (because investors lose faith), but that it also may contribute to the survival of a company in trouble (when it triggers restructuring efforts that would otherwise have come too late). The discussion in the present paper will be limited to the passive observer framework.
B. The time parameter in the passive observer framework

Even without the intricacies of the active observer situation, the time parameter is deeply involved in the failure prevision problem.

1. The prediction objective

Obviously, there is a time aspect in the formulation of the objective of the prediction (i.e., the definition of the two groups, when a binary classification method is applied) (the parameter T in Figure 1, section II.A). It makes no sense to inquire whether a firm will avoid failure indefinitely. The interesting question is whether it will survive, say, the next two years unscathed.
In surprisingly many failure prevision studies the value of the time horizon T is unclear because of inconsistencies between the failure prevision objective and the structure of the testing sample (VII.B.4).

2. Consecutive measurements
Usually, several annual accounts of the same company are available for analysis. This raises the question of how to model the relationships between these consecutive measurements. The relationships between consecutive annual accounts have often been neglected, i.e., the annual accounts are regarded as independent observations (in the training sample as well as in the testing sample and during application of the prevision rule). A typical example of this approach can be found in Ooghe, Joos and De Vos (1993). When several annual accounts from the same company are fed into the prevision rule one at a time, the result is a list of potentially conflicting predictions. Thus it is necessary to formulate a rule to resolve these contradictions and produce a single prediction. More often than not (as in Skogsvik (1990)), such a rule is lacking. One of the rare studies where this inconsistency problem receives some attention is Keasey, McGuiness and Short (1990). It is also discussed in Ooghe and Van Wymeersch ((1988), p.333). (A clear description of the problem can also be found in Boeykens (1994).) Moreover, from the viewpoint of the construction of a powerful failure prevision rule, it is probably unwise to neglect the relationships between consecutive annual accounts. Taking them on board in the model may amount to exploiting previously neglected information. This idea is not new: in one of the earliest statistical failure prevision studies, Meyer and Pifer (1970) already tried to tap the extra information in consecutive measurements by means of a transformation to dynamic variables (trend, coefficient of variation, ...). More recent work which attributes considerable predictive power to dynamic variables was reported in Falbo (1991). A replication of Falbo's study on Belgian data by Vandingenen indicates that Belgian failure prevision models can be improved by the introduction of dynamic variables (Vandingenen (1994)). A sketch of such a transformation is given below.
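The sketch (ours; the series of annual values is invented) computes a trend and a coefficient of variation from consecutive observations of one ratio, in the spirit of the dynamic variables just mentioned:

    import numpy as np

    # 'Dynamic variables' from five consecutive annual values of a ratio,
    # in the spirit of Meyer and Pifer (1970) and Falbo (1991).
    ratio_by_year = np.array([0.42, 0.39, 0.35, 0.30, 0.24])

    years = np.arange(len(ratio_by_year))
    trend = np.polyfit(years, ratio_by_year, deg=1)[0]  # least-squares slope
    cv = ratio_by_year.std() / ratio_by_year.mean()     # coeff. of variation

    print(f"trend = {trend:.3f} per year, CV = {cv:.2f}")
    # Both values can be fed into the prevision rule alongside the levels.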
3. Nonstationarity
Altman, Haldeman and Narayanan (1977) listed several reasons to construct an update of Altman's pioneering 1968 prevision rule (Altman (1968)). As another example, a change in the meaning of cash flow variables in the U.S.A. around 1973 was empirically detected by Gombola et al. (1987). In general, due to changes in the characteristics of failure, and in the availability and meaning of (accounting) data, we cannot expect a successful prevision rule, or the assessment of its performance, to remain valid indefinitely. While it may be easy to detect certain changes (such as changes in accountancy legislation), a measurement of their impact is impossible without empirical testing. Naturally, it would be desirable to qualify a failure prevision rule by an expected lifetime (period of applicability). In practice, the only feasible way to handle the problem of nonstationarity is to test the prevision rule at regular intervals (for example after the end of each accounting period, when new data become available). The regular monitoring and updating of a prevision rule is based on the collection of recent data. This needs to be done with care. A failure prevision dataset is structured in two dimensions: according to 'cases' (companies) and according to 'time' (accounting periods). Naturally, it is desirable to incorporate the 'old' dataset in the 'new' (updated) dataset. On the other hand, the new dataset ideally should be structured such that it cannot be distinguished from a dataset which was collected 'from scratch'. This requires both the collection of the latest annual accounts of companies from the old dataset, and the inclusion of new cases. An additional question that arises here is whether the accounts of the oldest year should be eliminated when data of the most recent year are added. (This question is linked with the problem of assigning time-dependent weight factors to the data or not; a hypothetical sketch of such an update step closes this subsection.) We are not aware of an investigation of these practical questions in the failure prevision literature. A second consequence of the continuing changes in populations of companies and in the economic environment is the need for intertemporal validation of failure prevision rules. This is explained in section VII.B. Note that, by regarding the (sub)populations in failure prevision as dynamic entities, we link the subject to the research on the dynamics of organizational populations (e.g., Hannan and Freeman (1989)).
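The hypothetical sketch announced above follows; the column names and the decision to drop the oldest year are our assumptions, since the literature offers no guidance on this point:

```python
import pandas as pd

# Merge the old dataset with the newly published accounting year.
old = pd.DataFrame({"company": ["A", "B"], "year": [1992, 1992], "x": [1.1, 0.9]})
new = pd.DataFrame({"company": ["A", "C"], "year": [1993, 1993], "x": [1.0, 1.3]})

updated = pd.concat([old, new], ignore_index=True)

drop_oldest = True  # assumption: a fixed-length window; weighting schemes are an alternative
if drop_oldest:
    updated = updated[updated["year"] > updated["year"].min()]
print(updated)
```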
4. Several related prediction problems
In the typical failure prevision framework, a time horizon T is selected, which divides the population of currently existing companies into
two subpopulations (II.A), for example: the companies that will disappear within three years, vs. the companies that will survive for at least three more years. A decision maker such as a bank may offer several types of loans with different terms, or may need to make a distinction between customers threatened with imminent failure vs. those that will get in trouble in the more distant future. Many failure prevision studies take that into account by constructing several (binary) prevision rules, each with a different value of T. (For example, Skogsvik (1990) provides (binary) prevision rules for one, two, ... and up to six years before failure.) In subsection VI.B.2 we mentioned the problem of conflicting predictions when a (single) prevision rule is applied to several consecutive annual accounts. The same problem arises when several binary prevision rules ('one year before failure', 'two years before failure', etc.) are applied to a single input. When the 'one year before failure' rule predicts imminent failure and the 'three years before failure' model promises survival for at least three more years, an additional judgment is needed to resolve the conflict. This problem has been neglected in most failure prevision work (a.o. in Skogsvik (1990), Altman et al. (1977)). (However, Ooghe et al. have devoted some attention to it in Ooghe et al. ((1988), p.332).) Some recent research in failure prevision has aimed at modeling the relationships between different predictions based on the same input. The potential advantages are that conflicts can be avoided and that hitherto wasted information may be exploited. For example, Keasey et al. modeled the relationships between different binary (classification) rules by replacing them by a single multigroup classification rule (using multilogit analysis) (Keasey, McGuiness and Short (1990)); a sketch of this multigroup idea follows below. Another approach consists of applying a survival analysis model instead of a statistical classification model (Luoma and Laitinen (1991)). Survival analysis estimates the distribution function of the survival time of a company. In principle, any binary or multigroup classification can be performed as soon as the survival time distribution is available. In contrast with traditional statistical classification methods, a survival analysis method models the time parameter explicitly. Survival analysis also has the potential to make better use of the available dataset because it explicitly allows for censored observations.
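A minimal sketch of the multigroup idea (with simulated placeholder data; this is not Keasey et al.'s actual specification) shows how a single multinomial logit model yields one coherent probability per class, so that conflicting binary verdicts cannot arise:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classes: 0 = fails within 1 year, 1 = fails within 3 years, 2 = survives.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))        # four financial ratios (simulated)
y = rng.integers(0, 3, size=300)     # placeholder class labels

# With the default lbfgs solver this fits a multinomial logit model.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)
print(model.predict_proba(X[:1]))    # one probability per class for one input
```

Because the three probabilities sum to one for each input, no additional judgment is needed to reconcile them.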
5. Conclusion
During the first two decades of multivariate statistical failure prevision research, changes to the statistical methodology have mainly been limited to refinements and additions that could be incorporated in the statistical classification framework introduced by Altman (1968). Validation by means of holdout samples or the leaving-one-out method has widely been accepted. Alternative classification methods have been tried out (for example logit analysis vs. linear discriminant analysis) without much effect. The necessity of intertemporal validation has been stressed (Joy and Tollefson (1975)), but is often neglected, and the idea of using dynamic variables in the classification rule has been around at least since 1970 (Meyer and Pifer (1970)). At the end of the eighties, Rees formulated his opinion about failure prevision research as follows (Rees (1990), p.420): 'The straightforward development of the empirical base and statistical sophistication of failure prediction models has proved a sterile branch of research. Typically, the model has retained discriminant ability whatever the methodology used, but has exhibited little improvement to reward the efforts of the researchers.' Subsequently, he draws attention to 'alternative' developments, i.e. the use of new types of variables (such as capital market information, funds flow variables and submission lags). However, the recent studies which concentrate on the time aspect of failure prevision (e.g. Falbo (1991), Keasey et al. (1990), Luoma et al. (1991)) may be the onset of a departure from the traditional statistical framework of failure prevision. In our opinion, the following considerations are important for future developments in that direction. Firstly, several time aspects of failure prevision are related. (For example: modelling the relationships between consecutive annual accounts of the same company can be expected to reduce the conflicts between predictions from different binary models applied to the same data.) In future studies, several time-related issues should be looked at simultaneously. Secondly, a basic issue in statistical modelling is the search for a compromise between the limited sample size (limited availability of information) and the complexity of the statistical model (cfr. Hand (1981), section 6.2, Flury (1988)). (The need for such a compromise probably explains the predominance of linear models, cfr. Altman et al. (1977).) Until now, the degrees of freedom of the data have been
used primarily to model the relationships between variables from the same accounting year. When we try, in addition, to model the intertemporal relationships, we add complexity. A definite answer about the benefits of a better modelling of the time parameter can only be reached when, in the search for the most significant relationships, the cross-sectional and the intertemporal parameters are investigated simultaneously and on an equal basis.
VII. TWO ADDITIONAL EXAMPLES OF (FUTURE) FAILURE PREVISION RESEARCH TOPICS
A. The potential role of formal quality measurements in failure prevision
The meaning, in a broad sense, of an annual account variable is more complex than a mere technical accounting definition which specifies how to calculate its value. Its meaning is shaped through its use by decision makers, it is not static, and has to be discovered instead of being known once and for all (part IV). However, the technical definitions of the annual account variables establish a series of exact relationships between them. Once the values of a number of accounting variables have been fixed, the range of potential values of some other variables is restricted. When one or more of these restrictions are not satisfied in an annual account, we say that its formal quality is not optimal. A formal quality measure serves to compare annual accounts with each other, or with a benchmark value, regarding formal quality. Since large numbers of Belgian companies are legally bound to publish detailed annual accounts according to standardized schemes5, and since the data can be accessed by the public through the Balanscentrale (Balanscentrale (1989), (1991)), researchers have investigated the formal quality of the annual accounts. Results have been reported in, among others, Jegers and Buijink (1983), Buijink and Jegers (1983), Jegers and Buijink (1987) and Bettonville, Jegers and Vuchelen (1992). The following conclusions from Bettonville et al. are representative:
- the number of faultless accounts is low (for example: at most 40% of the accounts of large Belgian companies in 1989);
- the average number of errors per annual account is low (1.67 for large companies in 1989);
- the quality of the accounts improves gradually (Bettonville et al. (1992), pp.6-7).
Until now, the literature about the formal quality of Belgian annual accounts has focussed on the measurement of the quality in populations and as a function of the accounting period. A typical result of this research is the one reported in Table 4 of Bettonville et al. (1992), which lists the number of errors per annual account, averaged over a population of 'large' Belgian companies, and as a function of the accounting year (from 1984 onwards). Attempts to interpret the measurements are limited to the calculation of partial quality measures (e.g. the number of errors in the balance sheet), instead of exploring relationships with other company characteristics. (A few exceptions can be found in Jegers et al. ((1987), pp.10-11), which states for example that the number of errors is negatively correlated with the size of the company.) Thus, in the existing literature, the formal quality of Belgian annual accounts has not been investigated from the viewpoint of a particular application, such as failure prevision. Jegers et al. note that the occurrence of formal errors in an annual account does not mean per se that it is unsuitable for analysis and research. Whether it is useful despite the errors depends on the application at hand (Jegers et al. (1987)). A failure prevision rule serves to assess each company individually, at a certain moment. When the data about the company contain formal errors, one has to decide whether or not to proceed with the prediction. It would be interesting to investigate whether this decision can be based on a formal quality measure, which quality measure is most suitable, and whether it can be used to meaningfully qualify the predictions. Also, each annual account in the training sample which is used to construct a prevision rule must be evaluated in order to determine whether it is a suitable point of comparison. When an observation contains errors, one has to decide whether it will be discarded, corrected or maybe partly used. Whether or not the measurement of the formal quality of annual accounts can be helpful to make these decisions is a relevant (and apparently also as yet unexplored) research topic. The authors of Ooghe et al. (1993) stress that they have put considerable effort in detecting and correcting (as much as possible) the (formal) errors in the sample of their failure prevision study. However, they do not report a measure of the effect of these efforts, nor do they specify the quality measure, the precise nature of the decisions based on it, or the algorithm of correction.
Thus, the questions which arise about the formal quality of annual accounts in the context of failure prevision require research of a different kind than what has been reported so far in the literature. The formal quality measures which have been used in the literature are all based on counting the number of violated restrictions. In Van Landeghem (1994), it is shown that this set of quality measures is too one-sided for future research into the use of formal quality measures in failure prevision. Moreover, the usual quality measures depend on subjective details in the formulation of the list of restrictions (which describes the formal requirements). Therefore the set of quality measures is extended with a new type of measure based on the distance between an observation and an area representing the formally correct accounts (Van Landeghem (1994)). The distance-based measures do not depend on subjective elements in the list of restrictions. Moreover, they offer an easier way to distinguish rounding errors and a potential method to correct (or rather improve) annual accounts with formal deficiencies.
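As a minimal sketch (with one hypothetical, heavily simplified restriction set; the real reporting schemes impose many more restrictions), both types of measure can be computed as follows:

```python
# A counting-based and a distance-based formal quality measure for one
# hypothetical annual account. The restrictions are simplifications,
# not the actual Belgian scheme requirements.
account = {"total_assets": 100.0, "equity": 40.0, "liabilities": 58.0}

def violations(a):
    """Count violated restrictions (the usual counting-based measure)."""
    checks = [
        abs(a["total_assets"] - (a["equity"] + a["liabilities"])) < 0.5,
        a["total_assets"] >= 0,
    ]
    return sum(not ok for ok in checks)

def distance(a):
    """Distance to the formally correct set (here: to the balance identity)."""
    return abs(a["total_assets"] - (a["equity"] + a["liabilities"]))

print(violations(account), distance(account))  # 1 violation, distance 2.0
```

The counting measure only registers that a restriction is violated; the distance measure also registers by how much, which is what makes rounding errors easy to distinguish from substantive ones.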
B. The validation of failure prevision rules
1. The objective of validation: discrimination vs. classification
When a statistical classification method is applied to a failure prevision problem, the objective is not to allocate the observations of the training sample, but rather to minimize the number of mistakes when new observations are assigned to the groups. (In the literature, the first objective is often referred to as 'separation' or 'discrimination', and the second is rather called 'classification', but this terminology is not standardized completely.) While a 'discrimination rule' may be tested simply by applying the rule to allocate the observations of the training sample, the same method overestimates the accuracy of a classification rule (as the rule is inevitably better adapted to the training sample than to the population) (Hand (1981), section 1.4). This methodological issue has been taken into account already in the first statistical failure prevision work (e.g. Altman (1968)), but nevertheless it is still being neglected in some of the more recent studies (for example in Espahbodi (1991), Falbo (1991)).
2. Estimators for the error rates
In practice, it is necessary to estimate the accuracy of a failure prevision rule before it is actually added to the decision-making process. Consequently, the data which are available for the construction of the prevision rule must also be used to test the rule, and vice versa: one large sample has to serve both for training and for testing. We have already mentioned that the 'resubstitution method' (i.e., allocating the observations of the training sample by means of the rule which needs to be evaluated) delivers an accuracy estimate which is too optimistic. An obvious remedy is to split the available data into a test sample (holdout sample) and a separate training sample. This, however, prevents us from building the most reliable prevision rule by using the largest sample size. Fortunately, estimators can be found that allow all the data to be inserted in the design sample and nevertheless avoid the bias of the resubstitution method. A well-known estimator is the 'leaving-one-out' method, which was already studied by Lachenbruch in the sixties (Lachenbruch and Mickey (1968)). This estimator avoids the bias of the resubstitution method, although it is applied to the training set. The leaving-one-out method has probably become the most widely used accuracy estimator in the failure prevision literature (where it is often referred to as the 'Lachenbruch method' or a 'jackknife method'). The investigation of the properties of (potential) classification accuracy estimators is an important and active research topic in statistics (Hand (1986)). Attention has shifted from bias to the more informative mean square error criterion (which takes into account both the bias and the variance of the accuracy estimator). From that point of view, the leaving-one-out method is beaten by more recent estimators (such as bootstrapping estimators). Surprisingly, developments beyond the leaving-one-out method have been neglected in failure prevision studies. The importance of a careful validation of failure prevision rules warrants future research in this direction.
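As a minimal sketch of the contrast between the two estimators (on simulated data, with linear discriminant analysis as the classification rule):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Two simulated groups of 50 companies described by three ratios.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(1, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)            # 0 = failed, 1 = non-failed

lda = LinearDiscriminantAnalysis()
resub_acc = lda.fit(X, y).score(X, y)        # resubstitution: optimistically biased
loo_acc = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()

print(f"resubstitution: {resub_acc:.2f}, leaving-one-out: {loo_acc:.2f}")
```

The resubstitution figure will typically be the higher of the two, illustrating the bias discussed above, while both estimates still use all the data for training.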
3. Intertemporal validation
Between the publication dates of the annual accounts in the training and test sample, and the moment when the prevision rule is applied, the structure of the population and the environment may have changed. As the prevision rule is adapted to the 'historical' population and
environment, change will raise the error rates. Estimates of error rates which do not take nonstationarity into account may be overly optimistic. Joy and Tollefson have already stressed this need for intertemporal validation in the seventies (Joy et al. (1975)). Nevertheless, many studies (for example Keasey et al. (1991), Ooghe et al. (1993), Gilbert et al. (1990), Skogsvik (1990)) simply ignore the effect of nonstationarity on the error rates. Joy and Tollefson cite Pinches and Mingo (1973a) as the only failure prevision work before 1975 that performs intertemporal validation. A recent example is Platt et al. (1990). Until now, intertemporal validation in failure prevision studies has always been performed by means of a holdout sample, with the disadvantage that a considerable part of the data is withdrawn from the training phase. Thus it seems worthwhile to investigate whether the leaving-one-out method (or one of the promising recent accuracy estimators) can be adapted to incorporate potential nonstationarity.
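As a minimal illustration (with a hypothetical dataframe), the intertemporal holdout approach amounts to holding out the most recent accounting year instead of a random subset:

```python
import pandas as pd

# Hypothetical placeholder data: one row per annual account.
data = pd.DataFrame({
    "year":   [1986, 1986, 1987, 1987, 1988, 1988],
    "ratio":  [0.4, 1.2, 0.5, 1.1, 0.3, 1.0],
    "failed": [1, 0, 1, 0, 1, 0],
})

test_year = data["year"].max()               # most recent accounts only
train = data[data["year"] < test_year]       # fit the rule on the past
test = data[data["year"] == test_year]       # validate on the 'future'
print(len(train), "training cases,", len(test), "test cases")
```

The cost of this design is visible in the last line: the most recent cases are withdrawn from training, which is exactly the drawback noted above.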
4. Biased testing samples
In many failure prevision studies, the sampling is deliberately biased in a way which is expected to improve the predictive ability of the prevision rule. A typical example is the so-called matching procedure. It means that each observation (annual account) in the 'non-failed' subsample is chosen from a subpopulation of annual accounts in the same year, with the same size value, activity code, etc. as a given observation in the 'failed' subsample (a sketch of such a procedure is given below). Matching is expected to reduce the disturbing effects of variables such as accounting year, size, activity code, etc. which are believed to have some predictive power but which are not included in the set of prediction variables. However, the real effect (positive or negative) of matching can only be determined empirically, and obviously by means of a test sample which has not been biased along the same lines. When the error rates are estimated by means of the leaving-one-out method (or the resubstitution method), training and test sample are identical, and consequently the bias of the training sample is present during testing (for example Laitinen (1993)). Many studies that apply the holdout method construct the training sample and the holdout sample by splitting a biased large sample, with the same consequences (for example Keasey et al. (1991)).
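The following sketch is an assumption-laden illustration of such a matching step, not the procedure of any particular study; the column names are hypothetical. Each failed case is matched to the nearest-sized non-failed case within the same accounting year and activity code:

```python
import pandas as pd

def match(failed: pd.DataFrame, non_failed: pd.DataFrame) -> pd.DataFrame:
    """For each failed company, pick one non-failed company with the same
    year and activity (NACE) code and the nearest size, without replacement."""
    picks = []
    pool = non_failed.copy()
    for _, row in failed.iterrows():
        candidates = pool[(pool["year"] == row["year"]) &
                          (pool["nace"] == row["nace"])]
        if candidates.empty:
            continue                      # no match available for this case
        best = (candidates["size"] - row["size"]).abs().idxmin()
        picks.append(pool.loc[best])
        pool = pool.drop(best)            # sample without replacement
    return pd.DataFrame(picks)

# usage: matched_non_failed = match(failed_df, non_failed_df)
```

Note that a testing sample constructed this way inherits the matching bias, which is precisely the problem raised in the text.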
The reliability of accuracy measurements in failure prevision studies is frequently reduced by additional types of bias in the testing sample. As an example, Figure 3 represents the '1984 model' in Declerc et al. ((1992a), p.361) (where random sampling instead of matched sampling has been applied).
FIGURE 3
[Timeline of the '1984 model': the bankruptcy dates of the 'failed' sample and the potential bankruptcy dates of the 'non-failed' sample (construction and validation data) precede the present; the application data are partitioned into subsets 1, 2 and 3.]
In the 1984 model the training sample consists of annual accounts which describe the state of the companies at the end of 1984. The companies of the 'failed' subsample went bankrupt during 1987 or 1988. The companies in the 'non-failed' subsample still existed at the end of 1989. The training sample is reused to test the prevision rule (by means of the leaving-one-out method). As a concrete example, we assume that the prevision rule is applied by a decision maker at the end of 1989, to a set of companies which are represented by their 1987 annual accounts. We can think of this set of companies as a partition of three subsets:
- the companies that will go bankrupt during 1990 or 1991;
- the ones that will go bankrupt during 1992;
- the companies that will still exist at the end of 1992.
(Obviously, the decision maker does not know which companies belong to each subset.) The predictive performance of the rule for the 1990/1991 subset has been evaluated by the inclusion of the 'failed' subsample in the tests. Its performance in the third ('after 1992') subset has been taken into account by tests with the 'non-failed' subsample. The cases of the 1992 subset, however, are not represented in the performance measure. As these are precisely the borderline cases, this bias of the testing sample probably causes the performance figure to be overly optimistic. (Note that, as the decision maker does not know in advance that a company will belong to the 1992 subset, it is impossible to restrict the application of the prevision rule to the first and third subsets.) Surprisingly, this type of bias (i.e., inconsistencies between the structure of the testing sample and the definition of failure and the relevant failure time) seems to occur frequently in failure prevision studies. Additional examples are Altman et al. (1977) and Ooghe et al. (1993). (Moreover, several authors, a.o. Skogsvik (1990), keep silent about the precise structure of their testing samples.)
5. Additional research topics and conclusion
Even in the passive observer framework (VI.A) several additional research topics concerning the performance of failure prevision methods are of interest and have not been investigated yet. Within the scope of this article, it is only possible to mention them briefly:
- when prevision rules or prevision algorithms are compared, it is necessary to have some idea about the magnitude of significant differences in error rates. Moreover, the comparison of failure prevision rules must be distinguished from the comparison of failure prevision algorithms.
- some studies suggest ranking companies instead of classifying them (e.g. Meyer et al. (1970)). However, a measure of the accuracy of such a ranking has not been suggested, let alone tested.
- most failure prevision studies merely report error rates at one value of the error cost ratio (Ooghe et al. (1993) is a welcome exception). An equally relevant dimension of performance is the range of cost ratios where a failure prevision rule is significantly more accurate than some benchmark rule (see the sketch after this list). This dimension of performance has hardly received any attention in the failure prevision literature.
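As a minimal sketch of this last idea, the expected misclassification cost can be reported over a range of cost ratios c (the cost of missing a failing firm relative to a false alarm), using the standard cost-minimizing cutoff 1/(1+c) on predicted failure probabilities; the labels and probabilities below are simulated placeholders:

```python
import numpy as np

# y = true labels (1 = fails), p = predicted failure probabilities.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)

for c in [1, 5, 10, 20]:
    threshold = 1.0 / (1.0 + c)          # cost-minimizing cutoff for ratio c
    pred = (p >= threshold).astype(int)
    cost = c * np.sum((y == 1) & (pred == 0)) + np.sum((y == 0) & (pred == 1))
    print(f"cost ratio {c:>2}: expected misclassification cost {cost}")
```

Reporting such a sweep, for the rule and for a benchmark, would reveal the range of cost ratios over which the rule actually dominates.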
In conclusion: there is probably some potential to make the future assessment of the accuracy of failure prevision more reliable, there is certainly a need to make it more complete, and the downward bias of error rate estimates (through the omission of intertemporal validation and biased testing samples) must be avoided. Due to the lack of support from economic theory, the statistical validation of failure prevision rules is of major importance. Hence the importance of future research to improve it.
NOTES
1. 'DOM' means 'domain'. The meaning of, for example, S1 is that if outcome 1, 2, 3 or 4 realises, the guess is game A; otherwise the guess is game B.
2. Randomized strategies do not belong to the observer's strategy space. If you do not understand this remark, consider yourself straightforward and wise.
3. The terminology concerning error rates is not completely standardized. Sometimes APER refers to the result of a particular type of estimation procedure for the error rate.
4. Note that P(A|B) and P(B|A) are sometimes referred to as 'the (conditional) error rates'.
5. A definition of this set of companies can be found in Ooghe ((1988), pp. 14-15) and Meulemans ((1992), pp. A.1-11 and A.1-25).
REFERENCES
Altman, E.I., 1968, Financial Ratios, Discriminant Analysis and the Prediction of Corporate Bankruptcy, The Journal of Finance 23, 4, 589-609.
Altman, E.I., Haldeman, R.G., Narayanan, P., 1977, Zeta Analysis: a New Model to Identify Bankruptcy Risk of Corporations, Journal of Banking and Finance 1, 29-54.
Balanscentrale, 1989, De jaarrekeningen op CD-ROM, gebruiksaanwijzing, (Nationale Bank van België, Balanscentrale), NBB 08/89.
Balanscentrale, 1990, Statistieken opgemaakt op basis van de jaarrekeningen voorgesteld volgens de schema's bepaald bij het koninklijk besluit van 8 oktober 1976, Boekjaar 1988, Verklarende nota, (Nationale Bank van België, Balanscentrale).
Balanscentrale, 1991, De Balanscentrale tot uw dienst, (Nationale Bank van België, Balanscentrale).
Beaver, W., 1966, Financial Ratios as Predictors of Failure, Empirical Research in Accounting, Supplement to the Journal of Accounting Research, 71-123.
Bettonville, H., Jegers, M., Vuchelen, J., 1992, De formele kwaliteit van de jaarrekeningen van de grootste Belgische ondernemingen: 1977-1989, (V.U. Brussel), CEMS-Paper 262.
Boeykens, P., 1994, De werkwijze van Keasey e.a. voor falingsvoorspelling, uitgevoerd met lineaire discriminantfuncties in plaats van logitfuncties. Een test met Belgische gegevens, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Bossier, G., 1992, De echte bedrijfsfit-o-meter, Intermediair, 31 maart.
Buijink, W., Jegers, M., 1983, De kwaliteit van de jaarrekeningen verspreid door de Balanscentrale: nieuwe resultaten, (Rijksuniversitair Centrum Antwerpen, Faculteit Toegepaste Economische Wetenschappen), working paper 83/13.
Dauwe, M., 1993, Kombinatie van diverse methoden van falingsvoorspelling: test met Belgische gegevens, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Declerc, M., Heins, B., Van Wymeersch, C., 1992a, The Use of Value Added Ratios in Statistical Failure Prediction Models: Some Evidence on Belgian Annual Accounts, Cahiers Economiques de Bruxelles 135, (3ème trimestre 1992), 353-378.
Declerc, M., Heins, B., Van Wymeersch, C., 1992b, Flux financiers et prévision de faillite: une analyse comportementale de l'entreprise, Cahiers Economiques de Bruxelles 136, (4ème trimestre 1992), 416-443.
Diercks, G., 1992, Selectie van variabelen als input voor een lineaire discriminantanalyse, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Espahbodi, P., 1991, Identification of Problem Banks and Binary Choice Models, Journal of Banking and Finance 15, 53-71.
Falbo, P., 1991, Credit-Scoring by Enlarged Discriminant Models, Omega International Journal of Management Science 19, 4, 275-289.
Flury, B., 1988, Common Principal Components and Related Multivariate Models, (Wiley, New York).
Foster, G., 1986, Financial Statement Analysis, 2nd ed., (Prentice-Hall International).
Gilbert, L.R., Krishnagopal, M., Schwartz, K.B., 1990, Predicting Bankruptcy for Firms in Financial Distress, Journal of Business Finance and Accounting 17, 1, (spring), 161-171.
Gochet, W., Srinivasan, V., Stam, A., Chen, S., 1993, Multi-Group Discriminant Analysis Using Linear Programming, Research Report 9305, (Department of Applied Economic Sciences, K.U. Leuven).
Gombola, M., Ketz, E., 1983, A Note on Cash Flow and Classification Patterns of Financial Ratios, The Accounting Review 58, 1, 105-114.
Gombola, M.J., Haskins, M.E., Ketz, J.E., Williams, D.D., 1987, Cash Flow in Bankruptcy Prediction, Financial Management, (winter), 55-65.
Hand, D.J., 1981, Discrimination and Classification, (Wiley, New York).
Hand, D.J., 1986, Recent Advances in Error Rate Estimation, Pattern Recognition Letters 4, 335-346.
Hannan, M.T., Freeman, J., 1989, Organizational Ecology, (Harvard University Press).
Jegers, M., Buijink, W., 1983, De jaarrekeningen op magneetband, verspreid door de Balanscentrale: een kwantitatieve en kwalitatieve analyse, Accountancy en Bedrijfskunde Kwartaalschrift 8, 1, 10-36.
Jegers, M., Buijink, W., 1983, De jaarrekeningen op magneetband verspreid door de Balanscentrale: verduidelijkingen, Accountancy en Bedrijfskunde Kwartaalschrift 8, 2, 89.
Jegers, M., Buijink, W., 1987, The Reliability of Financial Accounting Data Bases: Some Belgian Evidence, International Journal of Accounting, (fall), 1-21.
Johnson, R.A., Wichern, D.W., 1992, Applied Multivariate Statistical Analysis, 3rd ed., (Prentice Hall).
Joy, O.M., Tollefson, J.O., 1975, On the Financial Applications of Discriminant Analysis, Journal of Financial and Quantitative Analysis, (December), 723-739.
Kassab, J., McLeay, S., Shani, N., 1991, Forecasting Bankruptcy: Failure Prediction or Survival Analysis?, Paper presented at the 1991 Annual Congress of the European Accounting Association, (Maastricht, the Netherlands), April 10-12.
Keasey, K., McGuiness, P., Short, H., 1990, Multilogit Approach to Predicting Corporate Failure - Further Analysis and the Issue of Signal Consistency, Omega International Journal of Management Science 18, 1, 85-94.
Labro, E., 1992, Het gebruik van wiskundig-statistische modellen en expertsystemen door Belgische banken bij de beoordeling van bedrijven, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Lachenbruch, P.A., Mickey, M.R., 1968, Estimation of Error Rates in Discriminant Analysis, Technometrics 10, 1, 1-11.
Laitinen, E., 1993, Financial Predictors for Different Phases of the Failure Process, Omega International Journal of Management Science 21, 2, 215-228.
Luoma, M., Laitinen, E.K., 1991, Survival Analysis as a Tool for Company Failure Prediction, Omega International Journal of Management Science 19, 6, 673-678.
Mahmood, M.A., Lawrence, E.C., 1987, A Performance Analysis of Parametric and Nonparametric Discriminant Approaches to Business Decision Making, Decision Sciences 18, 308-326.
Meulemans, D., Van Acoleyen, M., Flamée, M., Merchiers, Y., 1992, Codes Boekhoudrecht 1992-93, (die Keure).
Meyer, P., Pifer, H., 1970, Prediction of Bank Failures, The Journal of Finance 25, (September), 853-868.
N.A.C.E., 1970, Algemene systematische bedrijfsindeling in de Europese Gemeenschappen, (Bureau voor de Statistiek der Europese Gemeenschappen).
Ooghe, H., Verbaere, E., 1985, Predicting Business Failure on the Basis of Accounting Data: the Belgian Experience, International Journal of Accounting, (Spring), 19-44.
Ooghe, H., Van Wymeersch, C., 1988, Financiële analyse van ondernemingen, Theorie en toepassing op de jaarrekening, Volume 1, (Stenfert Kroese).
Ooghe, H., Joos, P., De Vos, D., 1993, Risico-indicator voor een onderneming aan de hand van falingspredictiemodellen, Accountancy en Bedrijfskunde Kwartaalschrift 18, 3, 3-26.
Peeters, M., 1992, Beoordeling van bedrijven door middel van "risk of ruin" modellen, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Pinches, G.E., Mingo, K.A., 1973, A Multivariate Analysis of Industrial Bond Ratings, The Journal of Finance 28, 1, 1-18.
Pinches, G.E., Mingo, K.A., Caruthers, J.K., 1973, The Stability of Financial Patterns in Industrial Organizations, The Journal of Finance 28, 2, 389-396.
Platt, H.D., Platt, M.B., 1990, Development of a Class of Stable Predictive Variables: the Case of Bankruptcy Prediction, Journal of Business Finance and Accounting 17, 1, 31-51.
Ponnet, S., 1992, Beoordeling van bedrijven d.m.v. logitmodellen, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Rees, B., 1990, Financial Analysis, (Prentice Hall).
Skogsvik, K., 1990, Current Cost Accounting Ratios as Predictors of Business Failure: the Swedish Case, Journal of Business Finance and Accounting 17, 1, 137-160.
Troch, F., 1992, Beoordeling van bedrijven door middel van lineaire programmatiemethoden, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Van Landeghem, G., 1994, Maten voor de formele kwaliteit van Belgische jaarrekeningen, submitted for acceptance as a research report of the Department of Applied Economic Sciences, (K.U. Leuven).
Vandingenen, A., 1994, Falbo's "Enlarged Discriminant Model" voor falingsvoorspelling: een test met Belgische gegevens, licentiate's thesis, (Departement Toegepaste Economische Wetenschappen, K.U. Leuven).
Vinso, J.D., 1979, A Determination of the Risk of Ruin, Journal of Financial and Quantitative Analysis 14, 1, 77-100.
Vlaamse Commissie voor Preventief Bedrijfsbeleid, 1993, Jaarverslag van de Vlaamse Commissie voor Preventief Bedrijfsbeleid over 1992, (Ministerie van de Vlaamse Gemeenschap, Vlaamse Commissie voor Preventief Bedrijfsbeleid).
Wijn, M.F.C.M., 1988, Uittreden van industriële ondernemingen, Een analyse per bedrijfsklasse, doctoraal proefschrift K.U. Brabant, (Stenfert Kroese).
Wood, D., Piesse, J., 1988, The Information Value of Failure Predictions in Credit Assessment, Journal of Banking and Finance 12, 275-292.
BOOK REVIEWS
International Handbook of Participation in Organizations: For the Study of Organizational Democracy, Co-operation and Self-Management, Volume iii: The Challenge of New Technology and Macro-Political Change, W. Lafferty and E. Rosenstein (eds), (Oxford University Press), 1993
Like many readers, presumably, I am in the habit of first browsing extensively through a new book before starting to read it properly. That way I familiarize myself with the broad outlines of its contents. In this case it quickly became clear that the broad outlines of the book cannot be understood without insight into a wider context: the book is part of a series and must be read in that light. In 1983 the publisher Wiley issued part 1 of the "International Yearbook of Organizational Democracy", followed in 1984 and 1986 by a 2nd and 3rd part. Part 4 appeared in 1989 with Oxford University Press. The change of publisher also brought a change of title, so that this book was again called volume i. After part 5 (vol. ii, 1991), part 6 (or vol. iii) now lies before us for review. These outward changes, however, have not affected the substantive continuity of the project. Each part of the series was assigned two editors, and the whole was supervised by Frank Heller, director of the Tavistock Institute in London. As the reader knows, the Tavistock Institute has played an important role in developing and disseminating various forms of employee participation in companies. The series is very ambitiously conceived: it aims to give the interested reader a well-organized entry into the great diversity of initiatives in the field of employee participation, without sacrificing depth. What the editors have in mind is "a single reference source for scholars and policy-makers". In total, some 140 contributions were brought together from authors in 22 countries. Because the present book is the last of the series, the reader will find at the end a summary of all the articles that have appeared in the six parts. From this it becomes clear that the editors have largely fulfilled their ambition. The scope of the whole project is perhaps best illustrated by running through the various headings under which the individual articles are grouped. By far the largest heading is formed by the "country studies": 34 case studies in total, covering roughly the entire industrialized world. Concretely, these are all the Anglo-Saxon and Scandinavian countries, most West European countries, 7 East European countries, 6 Asian countries, and further Chile, Peru, Israel and Tanzania. The only striking
absentees are Spain ... and Belgium. Another important heading is the one in which the authors evaluate the field (25 contributions). Recent theoretical developments and recent research findings are covered in 16 and 21 articles respectively. Of interest to academics is the heading "landmarks revisited", in which classic studies are reconsidered in the light of new insights (14 contributions). Finally, a few specific themes are treated, such as training, participation and technology, and the macro context of organizational democracy. The last two themes form part of the present volume. Let us return to the contents of the present volume. The reader is immediately confronted with a bewildering diversity of contributions, in which the division into headings proves quite relative and any form of integration seems, at first sight, absent. In this case, however, that diversity matters. It reflects the richness of the field itself and the complexity of the problems. There is the variety of forms of employee participation: representative systems such as the German Mitbestimmung or our works councils; direct participation, such as quality circles and autonomous teams; and financial participation, such as profit-sharing or the granting of employee shares. There is the richness of societal and historical contexts, which have each shaped the concrete arrangements to a considerable degree, and there are the different disciplines interested in the subject, namely political science, sociology, organizational psychology and business economics. Each author approaches his subject, necessarily, from his own historical and conceptual background. Through this the reader ultimately gains both overview and insight. He gradually becomes steeped in the variety of societal and cultural roots, the laborious process of regulation and agreement-making, the complexity of interpersonal relations. This is important in a field whose greatest danger lies in theoretically or ideologically inspired simplism. It is a great merit of the editors to have found so many authors who manage to integrate a belief in organizational democracy as a value with a realistic and nuanced vision. The book itself contains 22 contributions; I run through them briefly. The 1st article deals with measures against discrimination in the workplace. The author discusses the legal provisions in Canada and their effect on practice in the light of three well-known change strategies in organizations: top-down (for example via regulation); organization development (O.D.), aimed at cultural change; and a political approach from a 'stakeholders' perspective. Her plea for eliminating discrimination against women, the disabled and minorities is well founded and deserves to be read in our country as well. A 2nd contribution compares the soaring success of ESOPs (Employee Stock Ownership Plans) with the limited success of quality circles in the U.S. The difference in institutional acceptance is explained by the distinction between adaptation (to changes in the technological environment) and legitimation (within a changing institutional and cultural environment).
The 3rd article is a review of the classic Coch & French studies on resistance to change and of the criticisms other authors have raised against them. The author takes up the defence of Coch & French's thesis that it is above all the experience of being able to exert influence that leads to better coping with change.
Recent theoretical developments are found in the following contributions. Chapter 4 distinguishes between the socio-dynamic, motivational and cognitive components of participation. The author explains the sometimes observed, sometimes absent effect of participation on performance by means of intervening variables: the positive influence of participation will only lead to visible effects in situations where the intervening variables leave much room for improvement. The 5th article is of French make. The author sees participation as a form of regulation in a world characterized by dynamic complexity. In passing he gives an interesting analysis of the (fashionable) phenomenon of "deregulation". The heading "recent empirical findings" comprises 3 contributions. Chapter 6 attempts to summarize a comparative study of the influence of employees on complex decisions in organizations in the Netherlands, England and Slovenia. The text, unfortunately, is barely comprehensible for anyone not familiar with the original report. The 7th article is the only one in which a business-economics research approach can be recognized. I am convinced that a better representation would have been possible: its conceptual poverty contrasts sharply with the nuance and depth of the rest of the book, and the interpretation of the results is, moreover, highly debatable. The 8th contribution reports on a survey among British employees of companies with some form of profit-sharing. Employees are predominantly positive about the programmes in question, but expect little effect on the day-to-day operation of the company. Ownership thus does not automatically lead to involvement or employee control. This brings us to the country studies. In this volume the reader will find a discussion of the legally mandatory system of profit-sharing in France, an analysis of the unenviable situation of the South Korean employee, and a report on developments in Bulgaria and Russia. It is remarkable that in these last countries a movement is under way to give managers more influence in the company; under the former regime, after all, important decisions were taken by elements from the party. The chapter on renewed tripartism in the Netherlands is too lightweight and may safely be skipped. One of the special themes in this book is the link between participation and technology. This heading counts 4 contributions. In the 14th article MIT confirms its reputation on a conceptual and empirical level. Drawing on their wide experience, among others in the automobile industry, the authors show how employee participation, new production methods and the introduction of new technologies only lead to substantial improvements in results insofar as they are integrated into genuinely new ways of organizing. Chapter 15 briefly describes a programme of North Rhine-Westphalia on the implementation of new technologies. The 3 projects cited as examples are strongly oriented towards participation, but the discussion is general and rather sloganlike. The 16th contribution starts from a case study of a Swedish computer (consulting) firm which is seemingly run on the basis of shared values and symbols.
The author discusses how the pursuit of a familial and comradely atmosphere both furthers and hinders participation; to this end he develops the notion of pseudo-participation. The last article under this heading is written from a European perspective. The author describes, among other things, a study by the European Foundation among employers and employees and
argues that the consensus on, and the possibilities of, participation in the introduction of new technologies are much greater than the European debate on social dialogue would suggest. The last heading gathers five texts on the societal context of employee participation. Chapter 18 analyses the place and form of corporatism in present-day society. The author takes no position, except that the concept is far from dead. The 19th contribution deals with fitting an ESOP into a normal capital market; capital allocation and risk allocation are examined both in a closed and in an open market (stock exchange). The next article compares the pursuit of industrial democracy in Sweden and Norway. The author argues that the so-called Scandinavian model is not as homogeneous as is usually believed, and advances reasons for this, which mainly come down to macroeconomic and political differences between the two countries. Chapter 21 deals with the situation of the Canadian co-operatives, mainly in the financial sector, retail and agriculture. The difficulties these organizations currently face, and their hesitation to turn to the State for support, prompt the author to go into the relation between State and industry at length. Chapter 22, finally, describes the experience with the Work Environment Act in Norway. It stimulated the traditional consultation structures towards local work-environment initiatives, but their success was smaller than expected. The author attributes this mainly to a culture, on the side of management as well as of the unions, oriented towards negotiation rather than delegation, and shows by means of a successful case what can be learned from this Norwegian experience. This book is a rich reference work for anyone interested in employee participation, on the union side as well as on the employers' side. The level of the contributions is usually very high. Moreover, the book is well produced and user-friendly (the clearly printed list of abbreviations is no luxury in this context). It is now to be hoped that I can persuade a library to purchase the entire series.
Bert OVERLAET
K.U.Leuven
Strategic Precision: Improving Performance Through Organizational Efficiency, B. Karlöf, (Wiley & Sons, Chichester), 1993
This book addresses top and middle managers and students in business economics programmes. It attempts to offer these (future) managers a set of techniques and instruments for analysing the efficiency of a company's activities. This should yield better insight into the sources of competitive advantage and allow action plans to be developed to improve company performance. The book contains four chapters, but there should have been only three, and the division and content of the chapters also raise questions. The first chapter begins by clarifying the concept of efficiency. It rightly stresses that it refers to the balance between the value
created from the customer's point of view and the productivity with which these goods and/or services were produced. This means that Karlöf's definition does not coincide with the traditional definition of efficiency, which stresses only the productivity aspect. The chapter then introduces the concepts of economies of scale and experience effects. First, it is questionable whether these concepts should appear here at all, since the third chapter goes deeper into the cost aspects of Karlöf's efficiency concept. Second, the way these concepts are handled also calls for reservations: they are explained only superficially, supplemented of course with the "classic" illustrations. It comes across as a rehash of what Abell and Hammond, among others, presented as early as 1979, with the difference that each time the same material is rehashed, it becomes more diluted. The second chapter is, in my opinion, totally misplaced and therefore superfluous. For although the author sets out to offer techniques for analysing a company's competitive advantage, thereby clearly positioning himself at the level of competitive strategy, this chapter shifts attention to the corporate level. It concentrates on formulating portfolio strategies. The traditional concepts such as the "BCG growth-share" matrix and its derivatives developed by A.D. Little, General Electric and McKinsey are repeated ad nauseam, even though, given the theme of the book, they are beside the point here. Moreover, the unnuanced and incorrect use of concepts such as portfolio, subsidiary, division and diversification, and the frequent overlaps and repetitions, are extremely irritating. The last two chapters go deeper into the two components of Karlöf's efficiency concept: on the one hand the cost and productivity aspects, on the other the creation of value from the customer's perspective. The third chapter explains a number of methods for analysing and better monitoring the cost (structure) of a company. The techniques presented include "Activity Based Costing", "Time Based Competition" and, once again, the detection of experience effects. Attention is also paid to methods which try to determine the relative cost position of a business unit by comparing the cost components with a specific internal or external standard. When this standard is set internally, by comparing similar activities across divisions, these techniques are called "Best Demonstrated Practices". When the basis of comparison is determined by benchmarking activities against the productivity with which direct competitors perform the same activities, one speaks of "Relative Cost Position" methods. The last chapter is devoted to techniques that provide insight into the segmentation of the market and the preferences of (potential) customers. After all, an efficient company is, according to Karlöf's definition, one that succeeds in knowing the needs of its customers with "strategic precision", so as to align its available resources and capabilities with them more closely. The methods presented are factor, cluster and conjoint analysis, familiar analytical methods in market research.
Each of the methods discussed in the third and fourth chapters is described briefly and illustrated with examples. Although these show that each of these techniques can certainly be helpful in dissecting the
productivity of a company's activities and in studying the real needs of (potential) customers, it is highly doubtful whether this overview offers a manager enough of a grip to actually get to work. For that, the description is too summary and therefore too superficial. In sum, this book is grist to the mill of academics, business leaders and students who do not yet recognize the field of strategic management as a fully fledged discipline. All their prejudices are confirmed: strategic management rests on hollow slogans, contains little that is original, and gathers up and rehashes concepts that originate in business and industrial economics. Worse still, the business leader confronted with declining company results will find here no implementable instruments for analysing his company thoroughly. The book lacks depth and leaves the manager out in the cold, in the midst of his problems. He would have been better off spending his time otherwise.
MASTER'S THESES
The economic theory of strikes
WILLEKENS Franky (K.U.Leuven, Licentiaat E.W. (1994))
Why do strikes occur? It is customary to classify strikes according to the main point of dispute: wages, working conditions, redundancy arrangements, .... But this is a classification of little use. More important is the question why the parties involved are unable to reach an agreement on the point of dispute. In other words, to uncover the "causes" of strikes, economic theory examines the circumstances in which an agreement between parties is impossible. Economic theory, however, has difficulty explaining the occurrence of strikes, because negotiating parties have an incentive to avoid a strike. A strike imposes costs on both parties: the employer loses output and the employees lose their wages. Strikes are consequently Pareto-inferior. In the economic strike literature this is known as the Hicks paradox. John Kennan (1986) describes the paradox as follows: "... if one has a theory which predicts when a strike will occur and what the outcome will be, the parties can agree to this outcome in advance, and so avoid the costs of a strike. If they do this, the theory ceases to hold .... If the parties are rational, it is difficult to see why they would fail to negotiate a Pareto optimal outcome." Because both parties lose in a strike, it is difficult to construct a model with rational agents in which the solution or equilibrium is a strike. Hicks (1932) himself saw the majority of strikes as the result of the union's incomplete information about the employer's position. Given its information, the union will open the negotiations with high demands. If, during the negotiations, the union fails to adjust its demands, or adjusts them insufficiently, to the revealed position of the employer, a strike results: the union representatives have failed (negotiated wrongly and/or incompetently with the employer). With "adequate knowledge" or complete information, however, a strike would never occur. According to Hicks, strikes are thus misunderstandings that can be corrected or avoided by introducing "adequate knowledge" into the bargaining process. Mauro (1982) formalized Hicks' argument and moreover gave reasons for the absence of complete information, but obtained little convincing empirical evidence for the model. Once the Hicks paradox was recognized, the strike literature followed several paths, which can be grouped under three lines of thought. A first line of thought relaxes the neoclassical rationality assumption. Employer and union leadership would always, in a rational way, reach an agreement without striking. But the political model of Ashenfelter and
Johnson (1969) and of Farber (1978) points to a third party involved: the union members. They act irrationally and therefore often have unrealistically high expectations about the bargaining outcome. For political reasons (fear of a loss of members or of votes at the next union elections) the union leadership adopts the members' demands in the negotiations with the employer. This makes strikes possible. The conclusion of this line of thought is that strikes are inherent in collective bargaining: they simply happen as a result of the structural (political) characteristics of the bargaining process. Consequently they cannot be avoided, in contrast with the misunderstandings in Hicks' model. A second line of thought regards strikes as "accidents" or mistakes during negotiations, but holds that the probability of a mistake is determined rationally by the negotiating parties. Either through their bargaining behaviour (Siebert and Addison (1981)), or by adjusting the framework within which bargaining takes place (Reder and Neumann (1980)), the negotiating parties influence the probability of a strike. According to Siebert and Addison, strikes are comparable to traffic accidents. Although each accident is unforeseen (or the result of a mistake), the probability of an accident is the result of a rational choice: for a given route a driver chooses an optimal speed, to which a time saving and an acceptable accident probability correspond (Peltzman (1975)). Likewise, a negotiator maximizes his net income by choosing, within a given bargaining framework, a bargaining duration and a corresponding wage demand and strike probability. According to Siebert and Addison, imperfect information is a necessary condition for mistakes or "accidents" (cfr. Hicks). But "accidents" here are not objects of correction like the misunderstandings in Hicks' model: "accidents" are a calculated risk, whereas misunderstandings happen incidentally. With their "joint cost" model, Reder and Neumann link up with Siebert and Addison's accident hypothesis. But, in contrast with the previous model, the institutional context within which bargaining takes place is here the result of a joint choice and no longer exogenous. The parties choose a mutually acceptable bargaining protocol. This protocol specifies almost completely the procedures to be followed in collective bargaining; it can go so far that the negotiations are reduced to the application of a formula. In practice, however, there is room for disagreement and strikes. Reder and Neumann conclude that strike incidence must vary inversely with the joint strike costs of the parties involved. They assume, after all, that the negotiating parties will choose a more elaborate protocol when their strike costs are higher (in other words, when they have more to lose). In the early eighties, partly owing to developments in game theory, the literature took a third direction. This third line of thought sees a strike as an instrument to uncover information in an environment with private information. More specifically, in these models the employer possesses private information about his own profitability, while the union uses the strike to learn more about it. Hayes (1984) was the first to publish such a model with private information.
In her model the bargaining process between the parties involved is explicitly modelled. Previously this was avoided in almost all models (which many authors regard as a serious shortcoming).
Starting from a rational bargaining process, Hayes' model suggests that strikes occur more often at firms with low profitability ("low state") than at profitable firms ("high state"). Suppose a firm claims to be unprofitable or barely profitable. The union has no reason to believe this, and hence demands a high wage. A lying firm (one that is in fact profitable) has more to lose from a strike than a truth-telling firm. A firm in the "high state" is therefore (more readily) prepared to accept the high wage. For a firm in the "low state" matters are different: it can only prove that it really is barely profitable by undergoing a strike and the costs that come with it. Given that profits were in fact low, both parties would have been better off accepting the low wage immediately. So ex post the strike is not Pareto-optimal. But ex ante the union knows that the firm may lie, while a firm in the "low state" knows that it can only convince the union by going through a strike. Hayes' model and the other (game-theoretic) models with private information show that strikes can be the result of rational behaviour by both negotiating parties; there is no question of a misunderstanding or miscalculation. Some authors see in these (game-theoretic) models with private information the first fully consistent answer to the Hicks paradox. But these models clearly face a problem at the empirical level. For - more than in the earlier models - part of the relevant data is unobservable, and "proxies" have to be used as approximations. It therefore remains to be seen whether the recent models with private information will pass the empirical tests more successfully than their predecessors.
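The screening logic can be made concrete with a small numerical sketch. The code below is a stylized illustration in the spirit of Hayes-type models, not the model in the thesis; all numbers and names are hypothetical. It computes the shortest strike that makes a profitable firm prefer conceding the high wage immediately:

```python
def min_separating_strike(R_H, w_H, w_L, T):
    """Smallest strike length s (in periods) such that a profitable
    ('high state') firm with per-period revenue R_H prefers accepting
    the high wage w_H at once over enduring a strike of length s and
    then paying the low wage w_L.  Accepting now yields (R_H - w_H)*T;
    holding out yields (R_H - w_L)*(T - s).  Hypothetical parameters."""
    profit_accept = (R_H - w_H) * T
    for s in range(T + 1):
        if profit_accept >= (R_H - w_L) * (T - s):
            return s
    return None

# Example: revenue 10, wage demands 6 vs. 4, 20-period horizon: a strike
# of 7 periods separates the types.  A less profitable firm loses less
# per strike period, so only it is willing to sit the strike out.
print(min_separating_strike(R_H=10.0, w_H=6.0, w_L=4.0, T=20))
```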
Student progression at the university
DEMARSIN Bruno (K.U.Leuven, Licentiaat E.W. (1994))
Since its origins mankind has striven for knowledge. The Greek philosophers did so for the sake of knowledge itself. Since modern times, however, knowledge has come to be regarded as a necessary input for progress. Thus, alongside the traditionally accepted production factors capital, land and labour, a new factor arose in economic theory, namely human capital. A modern definition of this form of capital is found in Blaug (1991), who defines human capital as "the present value of past investments in the skills and knowledge of the individual". Treating human capital as a production factor leads to several interesting insights. First, theories of economic growth turn out to gain considerably in explanatory power when human capital is added to the production function. A fine example is found in Mankiw, Romer and Weil (1992): their empirical results appear to show that, once physical and human capital are included, the Solow model of growth theory can generate correct predictions after all and thus need not be rejected as such. Second, phenomena such as health care, labour migration and the pursuit of education can be regarded as investments in human capital. These investments are expected to raise the productivity of the individual who makes them and hence his or her future income stream. The efficiency of a specific investment in human capital can be measured in several ways. When the modern theory of human capital emerged in the early sixties, much use was made of age-income profiles: as the name indicates, diagrams plotting the evolution of income against age. More recently, cost-benefit analyses have also come into use for the analysis of such investments¹. For a correct application of this method, accurate information is indispensable. Later in the text it is shown how an individual can obtain information about the risks of investments in higher education. Before doing so, some other functions of education deserve mention, for education turns out to do more than merely raise productivity. The literature frequently points to its socializing function: Gintis and Bowles (1976), for instance, try to show that the present educational system moulds people to meet the demands set by capitalist society. In addition, education serves as an important selection criterion for employers. According to Arrow (1973), employers initially have no information about job candidates other than their level of education; the level attained will therefore form the exclusive basis for selection and for setting the wage. The information needed for decisions on investments in higher education can be generated by analysing student progression at the various institutions of higher education. The term progression here refers to the complicated process of enrolling, passing, retaking and leaving the institution with or without a degree; every student goes through one or more phases of this process. The ideal method for studying the progression process is a "split-hazard combined with logit" model. Because of its technical complexity, this method was split into two parts. Logit analysis² was used to determine which personal characteristics influence the chances of obtaining a degree. Hazard analysis³, by contrast, was used to determine how long a student with given characteristics remains at the institution in question. One then only needs information on the cost of attending university and on the benefits of obtaining a particular degree to arrive at a complete cost-benefit analysis. Applying these methods to data on first-year students at the faculties Toegepaste Wetenschappen (T.W.) and Economische en Toegepaste Economische Wetenschappen (E.T.E.W.) of the K.U. Leuven led to some interesting findings.
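As an aside, the logit step described above is easy to illustrate. The sketch below uses synthetic data and hypothetical variable names; it is not the actual data or specification of the thesis:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for student-level data: hours of mathematics in
# secondary school, secondary-school grade and a gender dummy, with a
# degree indicator generated from them (all numbers hypothetical).
rng = np.random.default_rng(1)
n = 1000
math_hours = rng.integers(3, 9, n)
grade = rng.normal(70.0, 10.0, n)
female = rng.integers(0, 2, n)
logits = -12.0 + 0.5 * math_hours + 0.12 * grade + 0.1 * female
degree = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Logit regression: which characteristics shift the log-odds of graduating?
X = sm.add_constant(np.column_stack([math_hours, grade, female]))
fit = sm.Logit(degree, X).fit(disp=0)
print(fit.params)
```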
A first conclusion is that the existing stock of human capital, as described by the prior education followed and the results obtained during that education, is by far the most important determinant of the success of investments at our university. A second observation is that the entrance examination at the faculty T.W. brings a clear reduction in the (relative) number of failures. No hasty conclusions should be drawn from this, however: theoretically it is possible that even a one-year stay at the university generates enough productivity effects to justify the costs of the investment. Finally, although there is a slight effect of factors such as gender and geographical origin, the selection mechanism at the Leuven university turns out to be rather neutral with respect to the characteristics an individual cannot influence. Moreover, only a relatively small share of first-year students comes from working-class backgrounds. These two findings suggest that an important selection already takes place before university and that for young people from these backgrounds the step to university is still too large. If government intervention were deemed desirable, it would therefore best be directed at a further democratization of university education and at correcting possible negative selection during prior schooling. These results in turn raise a whole series of questions that could not be resolved within the scope of a dissertation. It is hoped, however, that they provide an impetus for further research in this field.
NOTES
1. See Blaug (1991) for a survey of this and other techniques for the analysis of investments in human capital.
2. See Maddala (1983) for an extensive treatment of logit analysis.
3. For a brief overview of this method we refer to Vanhuele et al. (1993).
REFERENCES
Arrow, K., 1973, Higher Education as a Filter, Journal of Public Economics 2, 3, 193-216.
Blaug, M., 1991, An Introduction to the Economics of Education, (Billing & Sons Ltd., New York).
Bowles, S. and Gintis, H., 1976, Schooling in Capitalist America - Educational Reform and the Contradictions of Economic Life, (Routledge and Kegan Paul Ltd., London).
Maddala, G.S., 1983, Limited Dependent and Qualitative Variables in Econometrics, (Cambridge University Press, New York).
Mankiw, N.G., Romer, D. and Weil, D., 1992, A Contribution to the Empirics of Economic Growth, Quarterly Journal of Economics 107, 407-438.
Vanhuele, M., Dekimpe, M.G., Sharma, S. and Morrison, D.G., 1993, Probability Models for Duration: the Data don't Tell the Whole Story, Onderzoeksrapport 9308, (Departement Toegepaste Economische Wetenschappen, K.U.Leuven).
I. ABSTRACTS OF DOCTORAL DISSERTATIONS
Vertical and Horizontal Category Structures in Consumer Decision Making: the Nature of Product Hierarchies and the Effect of Brand Typicality
Christel Claeys
Understanding how consumers classify brands into product categories is important for researchers of consumer behaviour and for marketers. This process not only leads to the identification of the brand but also to many inferences about it. Insight into the categorization process is likewise useful in determining market boundaries. We have approached the problem of brand classification from a categorization-theoretic perspective, using the theory of Eleanor Rosch to examine the nature and structure of product categories more closely. Our empirical study shows that product categories can be represented as hierarchical structures with three levels of abstraction: product class, product type and product variant/branded product variant. The product-type level functions as the "basic" or most important level. Contrary to expectations, the brand level turns out not to be distinct from that of the product variant. From Rosch's theory we have also borrowed the concept of "typicality" and examined whether typicality is useful in explaining the recall, evaluation and choice of brands. In a first experiment we show that initial exposure to highly typical or to atypical brands influences the recall and evaluation of competing brands. This result arises in interaction with the number of brands seen beforehand. A second experiment shows that the choice probability of a typical brand increases considerably when initial access to the product category in memory is provided, e.g. when one is confronted with the name of the product before making a choice at brand level. When access to the category is obtained by offering a choice between product categories before asking for the choice at brand level, the choice probability of typical brands increases only if the product category is internally homogeneous.
The way in which consumers classify brands into product classes is a main concern to consumer researchers and marketers. The outcome of this process not only identifies the brand but results in category-based inferences. In addition, product categorization serves the goal of establishing market boundaries. We have approached the issue of brand classification into coherent product categories from a categorization-theoretic point of view. We have adopted the theory of Eleanor Rosch to explore the nature and structure of product categories. Our empirical study demonstrates that product categories can be represented by three-tier hierarchies consisting of a product-class, a product-type and a product-variant/branded-variant level, in which the product-type assumes the status of "basic" or most important level of abstraction. Contrary to expectations, the brand level is not distinct from the level of the product variant. In addition, we have borrowed from Rosch's categorization model the concept of typicality. We have explored whether typicality is critical to explain the recall, evaluation and choice of brands. In a first experiment, we demonstrate that prior exposure to highly typical or to atypical brands affects the subsequent recall and evaluation of competing brands in the category. In producing this result, typicality is not predictive in its own right but interacts with the number of brands shown a priori. A second experiment demonstrates that the choice probability of typical brands increases significantly if access to the product category in memory is facilitated, i.e. when one is exposed to the category name prior to making a brand choice. When such access is obtained by presenting a choice between product categories prior to a choice between brands within one of them, the choice likelihood of typical brands is positively affected in homogeneous product categories only.
The Optimal Monitoring Policies for some Stochastic and Dynamic Production Processes Chen Shaoxiang
In this dissertation two stochastic and dynamic production problems are studied. The first one is the capacity-constrained one-product, periodic-review inventory system with a fixed (as well as a variable) cost and stochastic demands. The research in this paper aims to close the question of whether the modified (s,S) policy is optimal for this system. The second problem investigated comes from the inspection of multicharacteristic components in quality control. It serves to solve a real-life "bottom line" problem. The one-product, periodic-review problem is one of the most basic models in production and inventory control theory. This paper shows with counterexamples that in general the modified (s,S) policy is not optimal for the problem, whether of finite or of infinite horizon, if there is a capacity constraint and a fixed set-up cost. Yet the optimal policy does exhibit a systematic pattern - an X-Y band structure: whenever the inventory level drops below X, order up to capacity; when the inventory level is above Y, do nothing; if the inventory level is between X and Y, however, the ordering pattern differs from problem to problem. By exploiting the X-Y band structure, the calculation of the optimal policy can be greatly reduced. Inspection of multicharacteristic components is an important means of assuring product quality. A component is tested (inspected) with respect to several characteristics, non-conformance on any one of which results in rejection of the component. However, the need for inspection should also be justified in terms of the costs involved. Since the defective rates, the testing errors (Type I, II) and the inspection costs differ across characteristics, not all tests are cost-effective to conduct, while some of them may be justifiable to execute more than once. How should the sequencing and frequency of the tests be determined optimally? Based on the theorems established in this paper, an efficient algorithm for finding the optimal inspection plan is developed. Extensions are made and a real-life application is reported.
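The X-Y band structure described above is simple to state in code. The following minimal sketch uses our own function and parameter names, and deliberately leaves the behaviour inside the band as a problem-specific plug-in, as the dissertation indicates there is no general rule there:

```python
def xy_band_order(inventory, X, Y, capacity, middle_rule):
    """Ordering decision under an X-Y band policy (illustrative only)."""
    if inventory < X:
        return capacity               # below X: produce/order at full capacity
    if inventory > Y:
        return 0                      # above Y: do nothing
    return middle_rule(inventory)     # inside the band: no general closed form
```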
Approximate Parametric Analysis and Study of Cost Capacity Management of Computer Configurations
Dirk Overweg
Capacity management of computers comprises the management of hardware, of (system) software and of costs, with the aim of maximizing the economic return for the organization as a whole. In this dissertation a number of techniques useful for this purpose are developed. In part 1 an approximate parametric analysis is proposed for studying the behaviour of computer configurations in dynamic environments. The technique is based on the use of closed queueing networks as analytical models of computers, and is of greater practical use than existing techniques. Part 2 deals with the cost aspect. It first formalizes the BSW model of an information department; it is then shown how the use of activity-based costing can reduce the distortions that arise when computer costs are allocated according to the more traditional allocation techniques.
Computer capacity management involves hardware management, workload management and cost management in order to produce the maximum economic benefit for the organization. In this dissertation, techniques that may provide valuable help for these three management processes are developed and discussed. In part 1 of the dissertation an approximate parametric analysis technique is presented which may be helpful in managing hardware and workload. This technique, called the approximate scenario-generating approach, may be useful when studying the behaviour of computer configurations under changing hardware or workload circumstances (e.g. when a new processor is added or when the multiprogramming mix is altered). The technique is based on the use of closed queueing networks as models of computer configurations. It tries to strongly enhance the practical usefulness of the exact scenario-generating approach, which in general is superior to the scenario-driven approach. Part 2 deals with the cost management issue. It discusses the BSW model of an information department and its use in the capacity management process. Most attention is given to its usefulness for cost management. In particular, it is shown how the introduction of activity-based costing can reduce the computer cost allocation distortions resulting from the more traditional cost allocation techniques.
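For readers unfamiliar with the underlying model: single-class closed queueing networks of this kind can be evaluated with the classical exact mean value analysis (MVA) recursion. The sketch below is a textbook MVA routine with hypothetical parameters; it is not the dissertation's approximate scenario-generating technique:

```python
def mva(service_times, visits, n_jobs):
    """Exact single-class MVA for a closed network of FCFS queueing
    stations: iterates residence times, throughput and mean queue
    lengths from 1 up to n_jobs customers."""
    K = len(service_times)
    q = [0.0] * K                                  # queue lengths with 0 jobs
    for n in range(1, n_jobs + 1):
        # residence time: own service plus service of the q[k] jobs ahead
        r = [visits[k] * service_times[k] * (1.0 + q[k]) for k in range(K)]
        x = n / sum(r)                             # system throughput
        q = [x * r[k] for k in range(K)]
    return x, q

# Example: a CPU and two disks, 10 jobs in the system (numbers hypothetical).
throughput, queues = mva([0.02, 0.06, 0.04], [1.0, 0.6, 0.4], 10)
```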
Equality of Opportunity and Investment in Human Capital
Dirk Van de Gaer
This work is based on the view that the opportunities a society offers its children are essential for an evaluation of that society. Ideally all children should have equivalent options: they should be able to realize the same well-being. Existing social processes, however, offer unequal opportunities to children of different social origin or with different genetic endowments. The first chapter therefore defends a criterion that values negatively the inequality of opportunity attributable to these two factors and that also satisfies a Pareto criterion. Which process is to be preferred depends on value judgments. The second chapter examines the implications of using such a criterion for the evaluation of intergenerational relationships: relationships between a child's starting position and what the child later achieves. To evaluate these relationships, very general functional forms are used: one to evaluate the achievements and one to represent different degrees of inequality aversion with respect to children's opportunities. This makes it possible to derive conditions that hold for widely differing value judgments. The conditions under which one process is better than another are the traditional stochastic dominance conditions. A more equal distribution of opportunities is also welfare-increasing. The third chapter shows that the criterion defended can be used in a theoretical model in which altruistic parents try to raise their children's well-being by investing in their children's human capital or by bequeathing part of their wealth. Because bequests cannot be negative, and because small investments in human capital are assumed to guarantee a higher return than bequests, not all parents will use both transmission channels. Poor parents will invest only in their child's human capital, whereas rich parents will both leave bequests and invest in their child's human capital. Which of two processes based on such parental decisions is to be preferred depends on economic factors, such as taxes on bequests, the subsidization of education or the taxation of labour, and on parents' preferences.
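The stochastic dominance conditions referred to above can be checked mechanically on two samples of outcomes. A minimal sketch (an empirical-CDF comparison on a common grid; the data and names are hypothetical, not the thesis's criterion itself):

```python
import numpy as np

def dominates(sample_a, sample_b, grid, order=2):
    """True if outcome distribution A stochastically dominates B on `grid`
    (order 1: CDF of A never above that of B; order 2: integrated CDF of A
    never above that of B).  Illustrative sketch only."""
    Fa = np.array([np.mean(sample_a <= z) for z in grid])
    Fb = np.array([np.mean(sample_b <= z) for z in grid])
    if order == 1:
        return bool(np.all(Fa <= Fb))
    dz = np.diff(grid, prepend=grid[0])
    return bool(np.all(np.cumsum(Fa * dz) <= np.cumsum(Fb * dz)))
```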
Essays on Redistributive Taxation when Monitoring is Costly
Fred Schroyen
The standard literature on optimal tax schedules assumes that the government can observe perfectly and costlessly the market transactions its citizens undertake - such as the purchase of consumption goods, or the income received for labour services. When this is the case, it can use these transactions to determine the tax liability of each of its citizens. In reality, however, we observe that the government is not automatically informed about a great many transactions, and this gives rise to tax evasion. What the government can do is subject taxpayers to a tax audit in order to acquire additional information, and possibly sentence citizens to a fiscal fine when an audit brings evasion activities to light. Because acquiring additional information is usually a costly activity for the government, the probability for a citizen of being audited and - in case of evasion - convicted is in most cases smaller than one. In the various essays I try to answer two questions, namely (i) how does a taxpayer behave when the government cannot observe certain transactions carried out on second-hand markets, or when it is informed only with a limited probability about the remuneration for labour supplied on a black labour market, and (ii) given the answer to (i) and the costly nature of tax audits, what do the government's optimal (non-linear) tax and audit policies look like, and which fiscal fines should it impose on black-market activities and other forms of evasion? The second question is answered both analytically and by means of numerical simulations. The essays bear the following titles: Essay 1: Pareto efficient tax structures with side trading possibilities; Essay 2: The comparative statics of tax evasion with elastic labour supply; Essay 3: Optimal non-linear taxation when income from elastic labour supply is costly to monitor - I: Predetermined proportional penalty policy; Essay 4: Optimal non-linear taxation when income from elastic labour supply is costly to monitor - II: Endogenous penalties.
In the standard literature on optimal taxation, the crucial assumption is that the government can observe in a perfect way and without incurring any cost the transactions people undertake on markets - like the purchase of commodities - or the earnings received for labour services. In this case, the government can condition a citizen's tax liability on these observable transactions. However, in reality the government is not a priori perfectly informed about many transactions, and this gives rise to evasion activities. Still, the government is able to submit taxpayers to an audit in order to obtain additional information, and to sentence people to a fiscal penalty when evasion activities are discovered by the audit. Because such monitoring activities are costly for the government, the probability for a citizen of being audited is in general smaller than unity. In the different essays I attempt to answer two questions, viz. (i) what economic behaviour does a citizen display when the government cannot observe certain side transactions at all, or when the probability of being audited on black-market earnings is smaller than unity; and (ii) given the answer to (i) and the fact that auditing citizens requires resources, what do the optimal (non-linear) tax, audit and penalty policies look like? The second question is answered both analytically and by means of numerical calculations. The essays are entitled as follows: Essay 1: Pareto efficient tax structures with side trading possibilities; Essay 2: The comparative statics of tax evasion with elastic labour supply; Essay 3: Optimal non-linear taxation when income from elastic labour supply is costly to monitor - I: Predetermined proportional penalty policy; and Essay 4: Optimal non-linear taxation when income from elastic labour supply is costly to monitor - II: Endogenous penalties.
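The taxpayer's problem in (i) has a familiar textbook core. As a stylized illustration only - a risk-neutral evasion calculus in the spirit of the standard evasion literature, not the thesis's non-linear optimum, with hypothetical parameters:

```python
def expected_evasion_gain(evaded_income, t=0.40, p=0.05, f=2.0):
    """Expected gain from hiding `evaded_income` for a risk-neutral
    taxpayer: tax saved when not audited, minus the fine (rate f on
    the evaded tax) when audited with probability p."""
    gain_if_missed = t * evaded_income        # tax saved, no audit
    loss_if_caught = f * t * evaded_income    # fine on top of the tax due
    return (1.0 - p) * gain_if_missed - p * loss_if_caught

# A risk-neutral taxpayer evades whenever (1 - p) > p * f,
# i.e. whenever p * (1 + f) < 1.
```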
Spillovers and Cooperation in Research and Development / Oversijpelingseffecten en samenwerking in onderzoek en ontwikkeling
Geert Steurs
Economists today agree that research and development (R&D) is a key factor, not only in the analysis of an individual industry but also from a general economic and welfare perspective. The central role played by technological progress implies that sufficient attention must be paid to firms' incentives to innovate and to introduce new technologies. There is, however, a real chance that the market mechanism falls short in developing and disseminating innovations. One reason for market failure is the existence of spillovers, as a result of which research carried out by one firm can be used by other firms without their paying any compensation for it. Until now the theoretical literature has analysed only the influence of intra-industry spillovers, i.e. spillovers between firms in the same industry, whereas the empirical literature shows that interindustry spillovers, i.e. spillovers between firms active in different industries, are (more) important. In the first part of the thesis we therefore concentrate on the influence of intra- versus interindustry spillovers on firms' innovative efforts and profits, the level of consumer surplus, and welfare. To this end we use a two-stage game in which firms invest in R&D in the first stage and decide on their output level in the second stage. The results show that interindustry spillovers not only have an influence of their own, but also affect the impact of intra-industry spillovers. We then compare intra- and interindustry cooperation in R&D. The important difference between the two is that, in the case of interindustry R&D cooperation, the cooperating firms do not compete with each other in the product market. We show that interindustry R&D cooperation is more likely to result in higher R&D investment and welfare. In the second part of the thesis we analyse the influence of national and international spillovers when firms compete or cooperate in either integrated or segmented markets. We do so because the existing theoretical analysis of spillovers and cooperation in R&D takes place in a closed-economy setting. Using a similar two-stage model, we show that when markets integrate and firms compete, the impact of international spillovers becomes comparable to that of national spillovers. Moreover, market integration leads to higher R&D investment only if international spillovers are not too large. Comparing national and international R&D cooperation, we find that international R&D cooperation always results in higher R&D investment and welfare in segmented markets, whereas this is less likely in integrated markets. Finally, encouraging national cooperation in R&D is preferable under a wider range of circumstances when markets are integrated than when they are segmented.
Economists today agree on the idea that research and development (R&D) is a key factor, not only in the analysis of an individual industry but also from economy-wide and social welfare perspectives. The central role technological progress plays implies that attention should be given to firms' incentives for innovating and adopting new technologies. However, the market mechanism is likely to fall short of optimality in the development and dissemination of innovations. One cause of market failure is the existence of R&D spillovers, which imply that the research done by one firm can be used by other firms without the latter purchasing the right to do so. Up to now, the theoretical literature has only examined intra-industry R&D spillovers, i.e., R&D spillovers between firms operating in the same industry, while in the empirical literature interindustry R&D spillovers, i.e., R&D spillovers between firms operating in different industries, are found to be (more) important. Therefore, the first part of this dissertation focuses on the impact of intra- versus interindustry R&D spillovers on a firm's innovative efforts and profits, the level of consumer surplus and welfare. We do so using a two-stage game framework in which firms invest in R&D in the first stage and must decide on their output in the second stage. The results show that interindustry R&D spillovers not only have an impact of their own, but also influence the impact of intra-industry R&D spillovers. Subsequently we compare the outcomes of intra- and interindustry R&D cooperation. The important difference between the two is that the cooperating firms in an interindustry R&D agreement do not compete with one another in the product market. We show that interindustry R&D cooperation is more likely to result in higher R&D investment and welfare levels. In the second part of this dissertation we analyze the impact of national and international R&D spillovers when firms compete or cooperate in both segmented and integrated markets. The reason we do so is the simple observation that the existing theoretical discussions of R&D spillovers and cooperative R&D are conducted in the context of closed-economy models. Using a similar two-stage game framework, we show that when markets integrate and firms compete in R&D, the impact of international R&D spillovers becomes similar to the impact of national R&D spillovers. Moreover, market integration leads to higher R&D investments only when international R&D spillovers are not too high. Comparing national with international R&D cooperation, we find that international R&D cooperation always results in higher R&D investments and welfare in segmented markets, while this is less likely when markets are integrated. Finally, promoting national R&D cooperation is preferable under a wider range of circumstances when markets are integrated than when they are segmented.
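The two-stage logic can be illustrated with a symmetric duopoly in the spirit of the well-known d'Aspremont-Jacquemin framework. This is not the exact model of the dissertation, and the demand and cost parameters below are hypothetical:

```python
def equilibrium_rnd(a=100.0, c=60.0, b=1.0, beta=0.5, gamma=8.0, iters=200):
    """Symmetric equilibrium of a two-stage game: in stage 1 each firm's
    R&D x lowers its marginal cost to c - x - beta*x_rival (beta is the
    spillover rate); in stage 2 firms compete a la Cournot under linear
    demand p = a - b*Q.  Solved by fixed-point iteration on the stage-1
    first-order condition gamma*x = 2*(2 - beta)*q/3."""
    x = 0.0
    for _ in range(iters):
        q = (a - c + (1.0 + beta) * x) / (3.0 * b)  # symmetric Cournot output
        x = 2.0 * (2.0 - beta) * q / (3.0 * gamma)
    return x, q

# Raising beta weakens the incentive to invest under R&D competition,
# which is precisely where R&D cooperation can restore investment.
x_star, q_star = equilibrium_rnd()
```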
Evaluation of Interest Randomness in Actuarial Quantities / Evaluatie van het effect van stochastische intrest in actuariële grootheden
Leen Teunen
The interest rate is an important factor in actuarial quantities. Normally the interest rate is considered to be a known constant. In this dissertation we consider interest randomness described by a general Gaussian process or a Brownian motion. First, we have determined the probability that the surplus of an insurance company crosses a piecewise linear upper and/or lower boundary. This plays an important role in the determination of the ruin probability or of the probability that the insurance company has to pay a dividend. In other research fields these probabilities are relevant as well, e.g. to determine the order quantity in inventory theory. In a second part we have determined the density function of the discount factor, which one needs to calculate certain actuarial quantities, such as the premium. In a third part we have developed a method to evaluate the density function of $\int_0^t y(\tau)\, e^{-X(\tau)}\, d\tau$, where $y(\tau)$ is a function of time and $X(\tau)$ a Brownian motion, by making use of Feynman integrals and related partial differential equations. This is important when determination of the financial risk, the premium, etc. is required. We have illustrated this method by calculating the density function for a continuous annuity, a t-year life annuity and a t-year term insurance. We have also given some numerical results for these quantities. An expression for the price of an Asian option and for the IBNR reserve is derived as well. In a last part we have determined the density function of an annuity that may not cross a certain lower bound.
Interest plays an important role in actuarial quantities and is usually treated as a constant. We assume that the (fluctuations in the) interest rate can be described by a generalized Gaussian process or by a Brownian motion. In a first part the probability is determined that the surplus crosses certain boundaries. These probabilities play a role, for example, in deciding whether a dividend is paid out, but also in determining the ruin probability. In other research fields these probabilities are relevant as well; in inventory theory, for instance, they are used to determine the reorder level. In a second part we have determined the density function of the discount factor, with which a number of quantities of particular insurance contracts, such as the premium, can be calculated. In a third part, using Feynman integrals and the associated partial differential equations, we have developed a method to determine the distribution function of $\int_0^t y(\tau)\, e^{-X(\tau)}\, d\tau$, where $y(\tau)$ is a monotone function of time and $X(\tau)$ a Brownian motion. From this the premium, the insurer's financial risk, ... can be determined. This is worked out numerically for a continuous annuity, a t-year life annuity and a t-year term insurance. An expression for the price of an Asian option and for the IBNR reserve is also given. Finally, the distribution function is determined of an annuity that may not cross a certain lower bound.
II. REPRINTS
See Vol. XXXVIII, 4, 1993 for Reprints 374-453.
454. Abraham, F., 1993, "Delocalisatie" (Delocalisation), in Centrale Raad voor het Bedrijfsleven, Internationale Concurrentie en Delokalisatie, verslagboek, 9-11.
455. Abraham, F., 1993, Do We Need a European Social Dimension?, Cahiers d'Economie (Centre Universitaire de Luxembourg), 55-65.
456. Abraham, F., 1993, Regional Adjustment and Wage Flexibility in the European Union, in Krugman, P. and A. Venables, eds., The Location of Economic Activity, New Theories and Evidence, (CEPR, London), 445-476.
457. Abraham, F., 1993, L'Union Européenne conduit-elle au Fédéralisme? (Does European Union Lead to Federalism?), in Comité pour l'Histoire Economique et Financière de la France, Du Franc Poincaré à l'ECU, (Imprimerie Nationale de la France), 751-768.
458. Abraham, F., 1993, The Social Dimension of an Integrated EC-Nordic Economic Area, in Lundberg, L. and Fagerberg, J., eds., European Integration: a Nordic Perspective, (Aldershot, Avebury), 313-331.
459. Bettendorf, L. en Pepermans, G., 1993, Hervormingen van het pensioensysteem: wie draagt de lasten, Ouderen in Solidariteit, Vlaams Welzijnscongres, 415-427.
460. Buyst, E., 1993, De bouw, een soepele sector, in Loeckx et al., Wegwijs wonen, (Davidsfonds, Leuven), 51-54.
461. Buyst, E., 1993, The Decline and Rise of a Small Open Economy: the Case of Belgium, 1974-1990, in Aerts, E. et al., Studia Historica Oeconomica. Liber Alumnorum Herman Van der Wee, (University Press, Leuven), 71-80.
462. Cuijpers, C., 1993, De doeltreffendheid van de EG-energie- en koolstofbelasting in het federaal België, Milieu- en Veiligheidsmanagement, 7, 1-32.
463. De Bondt, R., 1993, Asymmetrische informatie en marktwerking, Economisch en Sociaal Tijdschrift 47, 635-662.
464. Dellas, H. and F. Canova, 1993, Trade Interdependence and the International Business Cycle, Journal of International Economics 34, 23-47.
465. Dedene, G. and M. Snoeck, 1993, Object Oriented Modeling: a New Language for Consistent Business Engineering, EastEurOOpe 1993 Conference, November 13-17, (Bratislava, Slovakia).
466. Dellas, H., C. Christou and A. Gagales, 1993, Optimal Monetary Policy: a New Test, Journal of Policy Modeling 15, 2, 179-197.
467. Dellas, H. and A. Stockman, 1993, Self-Fulfilling Expectations, Speculative Attack and Capital Controls, Journal of Money, Credit and Banking 25, 4, 721-730.
468. Dellas, H. and B. Zilberfarb, 1993, Real Exchange Rate Volatility and International Trade: a Reexamination of the Theory, Southern Economic Journal 59, 4, 641-647.
469. Dercon, S., 1993, Peasant Supply Response and Macroeconomic Policies: Cotton in Tanzania, Journal of African Economies 2, 2, 157-194.
470. Heremans, D., 1993, Economic Aspects of the Banking and the Investment Services Directives in a European Economic Area, in J.S. Stuyck and A. Looijestijn-Clearie, eds., The European Economic Area EC-EFTA. Institutional Aspects and Financial Services, (Kluwer Law and Taxation Publishers, Deventer-Boston), 105-177.
471. Heremans, D., 1993, Economic Aspects of the Second Banking Directive and of the Proposal for a Directive on Investment Services in the Securities Field, in J. Stuyck, ed., Financial and Monetary Integration in the European Economic Community, European Monographs, (Kluwer, Deventer), 37-55.
472. Peeters, T., 1993, De modernisering van de financiële markten in België: bedenkingen en achtergronden, Academiae Analecta 55, 1, 67-84.
473. Pepermans, G., 1993, De inkomens- en prijsgevoeligheid van de Belgische private consumptie in het Interbellum, in Aerts, E., Henau, B., Janssens, P. and Van Uytven, R., eds., 1993, Studia Historica Oeconomica. Liber Alumnorum Herman Van der Wee, (Leuven University Press, Leuven), 311-324.
474. Schokkaert, E., 1993, Cynische boekhouders en bevlogen profeten, in K. Boey, T. Vandevelde en J. Van Gerwen, red., Een prijswaardige economie, (Centrum voor Ethiek, Antwerpen), 291-315.
475. Schokkaert, E., 1993, Eigenbelang en wereldsolidariteit: een termijnperspectief, De gids op maatschappelijk gebied 84, 881-893.
476. Schokkaert, E., 1993, Fédéraliser la Sécurité Sociale: chiffres et valeurs, La Revue Nouvelle, 49-57.
477. Schokkaert, E., 1993, Wapenhandel, mensenrechten en de ethische grenzen aan de markt, (College voor Ontwikkelingslanden, Antwerpen), 81-98.
478. Schokkaert, E. en A. Decoster, 1993, Federalisering van de sociale zekerheid: cijfers en waarden, De gids op maatschappelijk gebied 84, 10, 747-759.
479. Schokkaert, E., J. Eyckmans and S. Proost, 1993, Efficiency and Distribution in Greenhouse Negotiations, Kyklos 46, 3, 363-397.
480. Sercu, P. and D. Miltz, 1993, Accounting for New Financial Instruments, Journal of Business Finance and Accounting 20, 2, 275-290.
481. Steenkamp, J-B., 1993, Food Consumption Behavior, in G.J. Bamossy and W.F. van Raaij, eds., European Advances in Consumer Research 1, (Association for Consumer Research), 401-409.
482. Steenkamp, J-B., 1993, Etnocentrisme bij Europese consumenten, Tijdschrift voor Marketing 27, December, 19-25.
483. Steenkamp, J-B., 1993, Kwaliteit van diensten: enige inzichten uit de economische theorie, Maandblad voor Accountancy en Bedrijfseconomie 67, December, 589-598.
484. Steenkamp, J-B., 1993, Internationaal consumentengedrag met speciale aandacht voor de Triade-machten, in B.A. Bakker en J.F. Laman Trip, eds., Handboek Internationalisatie, (Samson, Amsterdam), E3100, 11-29.
485. Steenkamp, J-B., Kamakura, W., Novak, T.P. and T.M.M. Verhallen, 1993, Identification de segments de valeurs pan-européens par un modèle logit sur les rangs avec regroupements successifs, Recherche et Applications en Marketing 8, 4, 30-55.
486. Steenkamp, J-B., T.M.M. Verhallen, J.H. Gouda, W. Kamakura en T.P. Novak, 1993, De zoektocht naar de Europese consument: heilige graal of kansrijke missie?, Tijdschrift voor Marketing 27, September, 17-23.
487. Steenkamp, J-B. and M. Wedel, Fuzzy Clusterwise Regression in Benefit Segmentation: Application and Investigation into its Validity, Journal of Business Research 26, 3, 231-249.
488. Teunen, M. and M. Goovaerts, 1993, Boundary Crossing Result for the
489. Teunen, M. and M. Goovaerts, 1993, Discount Factors under Random Interest Rates, Koninklijke Vereniging van Belgische Actuarissen 85, 13-22.
490. Van Cayseele, P., Bouckaert, J. and H. Degryse, 1993, Credit Market Structure and Information Sharing Mechanisms, in Van Witteloostuijn, ed., Studies in Industrial Organisation: Market Evolution: Competition and Cooperation Across Markets and Over Time, (Kluwer), 129-143.
491. Van Cayseele, P. and Th. Van Dijk, 1993, The Economic Implications of Convergent Patent Systems in Europe, in Hagedoorn, J., ed., Technical Change and the World Economy, (Edward Elgar Publishing, Cheltenham).
492. Vanthienen, J. and Dries, E., 1993, Illustration of a Decision Table Tool for Specifying and Implementing Knowledge Based Systems, Fifth International Conference on Tools with Artificial Intelligence (TAI), (Boston, Mass.), 198-205.
493. Vanthienen, J. and F. Robben, 1993, Developing Legal Knowledge Based Systems Using Decision Tables, Proceedings of the Fourth International Conference on Artificial Intelligence and Law (ICAIL), (Amsterdam), 282-291.
494. Vanthienen, J. and M. Snoeck, 1993, Knowledge Factoring using Normalization Theory, Proceedings of the International Symposium on the Management of Industrial and Corporate Knowledge (ISMICK'93), (Compiègne), 97-106.
495. Vanthienen, J., Van Buggenhout, T., Schepers, J., Van Buggenhout, B., Wets, G. and L. De Smedt, 1993, The Decision Table Technique as Part of a Computer Supported Procedure of Legal Drafting, Proceedings of the Sixth International Conference on Legal Knowledge-Based Systems (JURIX'93) Intelligent Tools for Drafting Legislation and Computer-Supported Comparison of Law, (Enschede), (Koninklijke Vermande BV, Lelystad), 71-80.
496. Vanthienen, J. and G. Wets, 1993, Building Intelligent Systems for Management Applications using Decision Tables, Proceedings of the Fifth Annual Conference on Intelligent Systems in Accounting, Finance and Management (ISAFM), (Stanford, California), 1-16.
497. Veugelers, R., 1993, Reputation as a Mechanism Alleviating Opportunistic Host Government Behavior against MNE's, The Journal of Industrial Economics 61, 1, 1-17.
498. Wets, G., 1993, Reverse Engineering: State-of-the-Art, Informatie, 2, 102-111.
499. Wijsen, J., 1993, A Theory of Keys for Temporal Databases, in Actes 9èmes Journées Bases de Données Avancées, (Toulouse, France), 35-54.
500. Abraham, F., 1994, Social Protection and Regional Convergence in a European Monetary Union, Open Economies Review 5, 89-115.
501. Abraham, F. and P. Van Rompuy, 1994, Regional Convergence in the European Monetary Union, Papers in Regional Science, to appear.
502. Berlage, L., 1994, The Structure, Performance and Development of the Manufacturing Sector in Burundi, Regional Program on Enterprise Development, Report on the First Wave of the Burundi Survey, (Catholic University of Leuven, Université du Burundi).
503. Bettendorf, L. and Blomme, J., 1994, An Empirical Study of the Distribution of Crops in Agricultural Land in Belgium, Historical Social Research 19, 53-63.
504. Buyst, E., Smits, J.P.H. and J.L. van Zanden, 1994, National Accounts for the Low Countries: the Netherlands and Belgium, 1800-1990, (N.W. Posthumus Institute, Groningen, Nederland), 1-26.
505. Cuijpers, C., 1994, Regionale Energie- en Milieuverkenning in België 2000, Energie & Milieu 10, 2, 35-43.
506. Cuijpers, C. en S. Proost, 1994, Energiesector, Hfdst. II.3 en IV.3, in A. Verbruggen, red., Leren om te keren. Milieu- en Natuurrapport Vlaanderen 1994, (Vlaamse Milieumaatschappij, Garant Uitgevers, Leuven/Apeldoorn), 83-96 en 555-566.
507. Cuijpers, C. en S. Proost, 1994, Verkeer en vervoer, Hfdst. II.8 en IV.8, in A. Verbruggen, red., Leren om te keren. Milieu- en Natuurrapport Vlaanderen 1994, (Vlaamse Milieumaatschappij, Garant Uitgevers, Leuven/Apeldoorn), 143-154 en 619-629.
508. De Bondt, R. and I. Henriques, 1994, Strategic Investment with Asymmetric Spillovers, Canadian Journal of Economics, (forthcoming).
509. Dedene, G., A. Depuydt, M. Snoeck and M. Verhelst, 1994, Object-Oriented Systems: from Conception to Delivery, Object Technology 94 Conference, (Oxford, UK).
510. Dedene, G. and M. Snoeck, 1994, M.E.R.O.D.E.: a Model-Driven Entity-Relationship Object-Oriented Development Method, ACM SIGSOFT Software Engineering Notes 13, 3, 51-61.
511. Dekimpe, M. and D.M. Hanssens, 1994, The Persistence of Marketing Effects on Sales, Marketing Science, (forthcoming).
512. Dellas, H. and V. Koubi, 1994, Smoke Screen: a Theoretical Framework, Public Choice 78, 351-358.
513. Dhaene, J. and N. De Pril, 1994, On a Class of Approximative Computation Methods in the Individual Risk Model, Insurance: Mathematics and Economics 14, 181-196.
514. Dhaene, J. and M. Vandebroek, 1994, Recursions for the Individual Model, Insurance: Mathematics and Economics, (forthcoming).
515. Heremans, D. en P. Van Cayseele, 1994, Branchevervaging: algemeen kader in economisch perspectief, in Cousy, H. en H. Claassens, Bank, financiewezen en verzekering, (Maklu, Antwerpen), 13-27.
516. Janssens, M., 1994, Evaluating International Managers' Performance: Parent Company Standards as Control Mechanism, International Journal of Human Resource Management 5, 4, 849-869.
517. Janssens, M. and J.M. Brett, 1994, Coordinating Global Companies: the Effects of Electronic Communication, Organizational Commitment, and a Multi-Cultural Managerial Workforce, Trends in Organizational Behavior 1, 31-46, (Wiley and Sons, New York).
518. Lagae, W., 1994, Creditor Government Support for Bank Debt Relief of Less Developed Countries: a Political Economy Approach, in Economie over de grenzen heen, beleidsanalyse in internationaal perspectief, (Centrum voor Economische Vorming en Onderzoek, Handelshogeschool, Antwerpen), 269-285.
519. Lagae, W., 1994, Voluntary Capital Market Financing for Latin America in 1990-'93: a Private Creditors' View, Onderzoeksrapport nr. 04, (Centrum voor Economische Vorming en Onderzoek, Handelshogeschool, Antwerpen).
520. Mayeres, I., 1994, The Marginal External Costs of Trucks: an Empirical Exercise for Belgium, Tijdschrift Vervoerswetenschap, 2, 121-136.
521. Schokkaert, E., 1994, Harmonie en conflict, ruil en verdeling, in J. Stevens, red., In harmonie én conflict, (Altiora, Averbode), 26-45.
522. Schokkaert, E., 1994, Verdelende rechtvaardigheid en schuldverlichting, Ethische Perspectieven 4, 2, 86-91.
523. Schokkaert, E. and J. Eyckmans, 1994, Environment, in B. Harvey, ed., Business Ethics: a European Perspective, (Prentice Hall, New York), 192-235.
524. Schokkaert, E. en Lagae, W., 1994, Het intrestdebat en de Derde-Wereldschuld, in L. Bouckaert, red., Intrest en Cultuur. Een ethiek van het geld, (Acco, Leuven), 199-217.
525. Schokkaert, E., Maes, J. en Proost, S., 1994, Economische waardering van milieuschade, in A. Verbruggen, red., Leren om te keren. Milieu- en natuurrapport Vlaanderen, (Garant, Leuven), 709-716.
526. Sercu, P., Prakash Apte and M. Kane, 1994, Relative PPP in the Medium Run, Journal of International Money and Finance 13, 5, 602-622.
527. Sercu, P. and C. Van Hulle, 1994, The Effect of the Increased Bid Rule in Takeovers with Pivotal Shareholders, Finance 15, 1, 101-114.
528. Snoeck, M., 1994, Formele specificaties: de basis voor kwaliteit, Informatie 36, 4, 257-266.
529. Steenkamp, J-B. and H. Baumgartner, 1994, An Investigation into the Construct Validity of the Arousal Seeking Tendency Version II Scale, Educational and Psychological Measurement, (forthcoming).
530. Steenkamp, J-B. and D.L. Hoffman, 1994, Marketing and Quality, in J.J. Hampton, ed., AMA Management Handbook, (American Marketing Association, Chicago), 3rd ed., 65-68.
531. Steenkamp, J-B. and D.L. Hoffman, 1994, Price and Advertising as Market Signals for Service Quality, in R.T. Rust and R.L. Oliver, eds., Service Quality: New Directions in Theory and Practice, (Sage, Newbury Park, California), 95-107.
532. Steenkamp, J-B., H.C.M. Van Trijp en J.M.F. Ten Berge, 1994, Perceptual Mapping Based on Idiosyncratic Sets of Attributes, Journal of Marketing Research 31, 15-27.
533. Steenkamp, J-B. and D.R. Wittink, 1994, The Metric Quality of Full-Profile Judgments and the Number-of-Attribute-Levels Effect in Conjoint Analysis, International Journal of Research in Marketing 11, 3, 275-286.
534. Teunen, M. en M. Goovaerts, 1994, Stochastische effecten bij IBNR afschattingen, in Heterogeniteit in verzekering, Liber Amicorum G.W. de Wit, 469-478.
535. Teunen, M., A. De Schepper and M. Goovaerts, 1994, An Analytical Inversion of a Laplace Transform Related to Annuities Certain, Insurance: Mathematics and Economics 14, 33-37.
536. Van Cayseele, P., 1994, Verankering en groei, Maandschrift Economie 58, 3, 165.
537. Van Cayseele, P. en H. Degryse, 1994, Banken en hun vestigingen, Bank- en Financiewezen 5, 263-272.
538. Van de Gucht, L., S. Chaplinsky and G. Niehaus, Resolving the Controversy over the Valuation of Employee Claims in ESOP Buyouts, Benefits Quarterly, (forthcoming).
539. Vanden Abeele, P. and D.L. MacLachlan, 1994, Process Tracing of Emotional Responses to TV Ads: Revisiting the Warmth Monitor, Journal of Consumer Research 20, 4, 586-600.
540. Vanhorebeek, F., 1994, The Unbearable Lightness of Maastricht for the Belgian Budget, Ministerie van Financiën, Documentatieblad 3, 105-138.
541. Vanhuele, M., M.G. Dekimpe, S. Sharma and D.G. Morrison, 1994, Probability Models for Duration: the Data Don't Tell the Whole Story, Organizational Behavior and Human Decision Processes, (forthcoming).
542. Vanthienen, J., 1994, A More General Comparison of the Decision Table and Tree, Communications of the ACM 37, 2, 109-113.
543. Vanthienen, J. and Dries, E., 1994, Decision Tables: Refining the Concept and a Proposed Standard, Communications of the ACM, (forthcoming).
544. Vanthienen, J. and Dries, E., 1994, Illustration of a Decision Table Tool for Specifying and Implementing Knowledge Based Systems, International Journal of Artificial Intelligence Tools, (forthcoming).
545. Vanthienen, J. and P. Merlevede, 1994, An Integrated Model for the Scoping Decision of Automated Help Desks, Proceedings of the Second Singapore International Conference on Intelligent Systems (SPICIS'94), (forthcoming).
546. Vanthienen, J. and Wets, G., 1994, From Decision Tables to Expert Systems Shells, Data & Knowledge Engineering, (forthcoming).
547. Vanthienen, J. and G. Wets, 1994, Restructuring and Optimizing Knowledge Representations, Proceedings of the Sixth International Conference on Tools with Artificial Intelligence, (forthcoming).
548. Vanthienen, J., Wets, G. and Dries, E., 1994, An Expert System Application Generator Based on Decision Table Modeling, Proceedings of the Second World Congress on Expert Systems (WCES), (Lisbon), 1094-1101.
549. Wets, G. and J. Vanthienen, 1994, Interfacing Decision Tables with Knowledge Acquisition Formalisms, The Second World Congress on Expert Systems, January 10-14, (Lisbon), 549-554.
550. Wijsen, J., Vandenbulcke, J. and Olivié, H., 1994, Functional Dependencies Generalized for Temporal Databases that Include Object-Identity, in Proceedings 12th International Conference on Entity-Relationship Approach, (Arlington, Texas), Lecture Notes in Computer Science 823, (Springer Verlag, Berlin), 99-109.
551. Wijsen, J., Vandenbulcke, J. and Olivié, H., 1994, On Time-Invariance and Synchronism in Valid-Time Relational Databases, Journal of Computing and Information 1, 1, Special Issue Proceedings 6th International Conference on Computing and Information, (Peterborough, ON, Canada).
552. Wijsen, J., Vandenbulcke, J. and Olivié, H., 1994, Temporal Dependencies in Relational Database Design, in Actes 10èmes Journées Bases de Données Avancées, (Clermont-Ferrand, France), 157-169.
The reprints are available free of charge from Mrs. A. Ronsmans, room 00.117, Naamsestraat 69, 3000 Leuven, tel. 016/32.66.88, fax 016/32.66.10.
III. RESEARCH REPORTS, DEPARTEMENT TOEGEPASTE ECONOMISCHE WETENSCHAPPEN
OR 9325 R. Veugelers: Global Cooperation: a Profile of Companies in Alliances.
OR 9326 C. Van den Acker: De invloed van het strategisch belang van informatietechnologie op de interne kontrole.
OR 9327 R. Veugelers and A.N. Mathur: Foreign Presence of Belgian Companies: the Case of India.
OR 9328 R. Veugelers: The Presence of Non-EC Multinationals in European Industry.
OR 9329 G. Dedene: The B.S.W.-Methodology: an Integrated Approach to Capacity Planning, Performance Management and I.T.-Cost Management in Banking.
OR 9330 A. Gaeremynck: The Influence of the Tax Treatment on the Information Value of the Accounting Depreciation Method in Belgium.
OR 9331 C. Van den Acker en J. Vanthienen: De kontrole van expertsystemen: een empirisch onderzoek bij Belgische bedrijfsrevisoren.
OR 9332 B. De Reyck and W. Herroelen: On the Use of the Complexity Index as a Measure of Complexity in Activity Networks.
OR 9333 G. Dedene and M. Snoeck: Object-Oriented Modeling: a New Language for Consistent Business Engineering.
OR 9334 M. Janssens and J.M. Brett: Coordinating Global Companies: the Effects of Electronic Communication, Organizational Commitment, and a Multi-Cultural Managerial Work Force.
OR 9335 M. Janssens, J.M. Brett and F.J. Smith: Managing Safety Policy Across Cultures.
OR 9336 R. Veugelers: Alliances and the Pattern of Comparative Advantages: a Sectoral Analysis.
OR 9337 E. Demeulemeester, W. Herroelen and S. Elmaghraby: Optimal Procedures for the Discrete Time/Cost Trade-Off Problem in Project Networks.
OR 9338 Lin Liangqi and C. Lefebvre: On Voluntary R&D Accounting: Tax and Contractual Cost Effects.
OR 9339 J. Dhaene and M. Vandebroek: Recursions for the Individual Model.
OR 9401 I. Geyskens: The Cognitive Effects of Advertising Repetition: a Review of the Two-Factor Model and its Moderators.
OR 9402 M. Lambrecht, Chen Shaoxiang and N.J. Vandaele: A Lot Sizing Model with Queueing Delays: the Issue of Safety Time.
OR 9403 R. De Bondt and Changqi Wu: Research Joint Venture Cartels and Welfare.
OR 9404 F.S. Desaranno and F. Put: A Metamodel for Office Information Systems: a Statement of Direction.
OR 9405 E.L. Demeulemeester and W.S. Herroelen: Modelling Setup Times, Process Batches and Transfer Batches Using Activity Network Logic.
OR 9406 R. Vandenborre and J. Meir: Country Risk and Bank Management.
OR 9407 M.G. Dekimpe and Z. Degraeve: The Attrition of Volunteers: Modeling Issues and Managerial Implications.
OR 9408 P.E. Merlevede and J.J. Vanthienen: An Integrated Model for the Scoping Decision of Automated Help Desks.
OR 9409 A. De Schepper: De differentiaalvergelijking van Thiele: enkele toepassingen in het geval van een continue Markov-keten.
OR 9410 M.G. Dekimpe and D.M. Hanssens: Four Empirical Generalizations about Market Evolution.
OR 9411 B. Cassiman: Research Joint Ventures and Optimal R&D Policy with Asymmetric Information.
OR 9412 M. Janssens, M.G. Dekimpe and Z. Degraeve: Turnover among Young Flemish Managers.
OR 9413 C. Steyaert and M. Janssens: The World in Two and a Way Out: the Concept of Duality in Organization Theory and Practice.
OR 9414 R. Vandingenen: Searching for the Drivers of Firm Performance by Meta-Analyzing Empirical Findings.
OR 9415 J.B. Steenkamp and H.C.M. van Trijp: Task Experience and Validity in Perceptual Mapping: a Comparison of Traditional Compositional Mapping and Two Consumer-Adaptive Techniques.
OR 9416 N. Kumar, L.K. Scheer and J.B. Steenkamp: The Effects of Perceived Interdependence on Dealer Attitudes.
OR 9417 N. Kumar, L.K. Scheer and J.B. Steenkamp: Powerful Suppliers, Vulnerable Resellers, and the Effects of Supplier Fairness: a Cross-National Study.
OR 9418 H. Baumgartner and J.B. Steenkamp: Exploratory Consumer Buying Behavior: Conceptualization and Measurement.
OR 9419 J.B. Steenkamp, H. Baumgartner and E. van der Wulp: The Relationships among Arousal Potential, Arousal and Stimulus Attractiveness and the Moderating Role of Need for Stimulation.
OR 9420 R. Veugelers and K. Kesteloot: Bargained Shares in Joint Ventures among Asymmetric Partners.
OR 9421 P. Ivens and M. Lambrecht: Extending the Shifting Bottleneck Procedure to Real-Life Applications.
OR 9422 G. Chen, J. Vanthienen and G. Wets: Fuzzy Decision Tables: Extending the Classical Formalism to Enhance Intelligent Decision Making.
OR 9423 M.R. Lambrecht and N.J. Vandaele: A General Approximation for the Single Product Lot Sizing Model with Queueing Delays.
IV. BUSINESS ECONOMICS PAPERS (BEDRIJFSECONOMISCHE VERHANDELINGEN), DEPARTEMENT TOEGEPASTE ECONOMISCHE WETENSCHAPPEN
BV 9401 C.Van Hulle: Azir Carpets: a Case Study.
BV 9402 E.Durinck, E.Laveren and C.Van Hulle: Tarificatiemethodes van roerende financiele leasing in Belgie: een beeld van de evolutie tussen 1985 en 1990.
The publications listed under sections III and IV can be obtained from Mevr. M. Wouters, Departement Toegepaste Economische Wetenschappen, Naamsestraat 69, 3000 Leuven, room 01.104, tel. 016/32.67.01 or fax 016/32.67.32.
V. RESEARCH REPORTS (ONDERZOEKSRAPPORTEN), DEPARTEMENT ECONOMIE
1. General series
13. T.Van Puyenbroeck: Het keurslijf van Fortuna: een discussienota omtrent de overheidsloterij.
14. H.Degryse and P.Van Cayseele: Banken en hun vestigingen.
2. International Economic Research Papers
95. F.Abraham: Regional Adjustment and Wage Flexibility in the European Union.
96. M.B.Canzoneri and H.Dellas: Real Interest Rates and Central Bank Operating Procedures.
97. H.Dellas and A.M.Igier: Alternative Debt Instruments and the Cost and Probability of Debt Stabilization.
98. P.De Grauwe: Fiscal Federalism and Debt Management: the Case of Belgium.
99. D.Guillaume: A Low-Dimensional Fractal Attractor in the Foreign Exchange Markets.
100. H.Dellas, K.Salyer: Monetary Policy, Interest Rates and Economic Activity.
101. H.Dellas, D.Mueller: Market Structure and Growth.
102. Ph.Bacchetta, H.Dellas: Firm Restructuring and the Optimal Speed of Trade Reform.
103. M.Van de Sande Bakhuyzen: Endogenous Growth and Intergenerational Transmission of Human Capital: a Continuous-Time Approach.
3. Financial Economics Research Papers
16. J.Duldemont: Demographics, Social Security and Private Saving: a Macro-Econometric Analysis for Belgium.
17. J.Bouckaert, H.Degryse: Phonebanking.
4. Public Economics Research Papers
32. T.Van Puyenbroeck: Discriminating between Efficient Decision Making Units: a Modified Free Disposal Hull Approach.
33. J.B.Braden, S.Proost: Economic Assessment of Policies for Combatting Tropospheric Ozone in Europe and the U.S.
34. B.De Borger, I.Mayeres, S.Proost, S.Wouters: Social Cost Pricing of Urban Passenger Transport: with an Illustration for Belgium.
35. E.Schokkaert and D.Van de Gaer: Equality of Opportunity and Intergenerational Transmission Processes.
36. A.Decoster, D.Rober, H.Van Dongen: Users' Guide for Aster. A Microsimulation Model for Indirect Taxes.
37. A.Decoster: A Microsimulation Model for Belgian Indirect Taxes. With a Carbon/Energy Tax Illustration.
38. F.Vanhorebeek, P.Van Rompuy: Testing the Intertemporal Government Budget Balance. Evidence for the ERM Countries, 1960-1993, and Historical Evidence for Belgium, 1870-1959.
39. D.Van de Gaer: Evaluating Inequality of Opportunity and Intergenerational Mobility.
5. Research Papers in Economic Development
23. L.Baeck: The Economic Thought of Classical Islam and its Revival.
24. L.Berlage, R.Renaud: Evaluatie van ontwikkelingshulp in Belgie en Nederland.
25. L.Berlage, G.Van Dille: The Post-War Evolution of the Terms of Trade of Less Developed Countries' Commodity Exports.
6. Werkgroep Quantitatieve Economische Geschiedenis (Quantitative Economic History working group)
94.01 E.Buyst: Het inkomen uit onroerend vermogen toevloeiend aan particulieren, 1920-1939.
7. Leuvense Economische Standpunten
73. P.De Grauwe: De sterke frank en de staatsschuld.
74. F.Abraham: Hoe arbeidsvriendelijk is ons loonbeleid?
75. P.De Grauwe: Werktijdverkorting en tewerkstelling.
76. W.Vanhaverbeke: Het ruimtelijk structuurplan Vlaanderen: een beleidsinstrument voor economische ontwikkeling?
The research reports and the Leuvense Economische Standpunten can be obtained, while stocks last, from the Centrum voor Economische Studiën, Naamsestraat 69, 3000 Leuven, room 02.105, tel. 016/32.67.25. The Leuvense Economische Standpunten cost 100 BF per issue, to be paid into account PRK 000-0544830-78.
GUIDELINES FOR AUTHORS
1. Articles submitted for publication should be sent in triplicate to: Tijdschrift voor Economie en Management, Prof. Dr. P. Van Cayseele, p/a Mevr. A. Ronsmans, Redactiesecretariaat, Dekenstraat 2, 3000 Leuven, Belgium. Articles published elsewhere are not accepted.
2. Manuscripts must be typed double-spaced on one side of the paper only, with the author's name clearly indicated. Articles may be written in English or Dutch; only a limited number of English-language contributions, however, can be included per volume.
3. Each article should contain a fully developed introduction and a conclusion with summary, so that the essence and relevance of the research question, as well as the authors' own contributions, come across to the reader clearly and concisely.
4. Footnotes should be kept to a minimum and numbered consecutively. They are collected at the end of the text.
5. The reference conventions are those of the "European Economic Review". References in the text should appear as follows: "As Goldfeld and Quandt (1973) argue ..." or "These decision tables ... see Verhelst (1980)". The list of references should be set out as follows: For books: Verhelst, M., 1980, De praktijk van beslissingstabellen (Kluwer, Deventer-Antwerpen). For periodicals: Goldfeld, S. and Quandt, R.E., 1973, A Markov Model for Switching Regressions, Journal of Econometrics 1, 3-15. For collective works: Taylor, B., 1970, Financing Tables and the Future, in Taylor, B., ed., Investment Analysis and Portfolio Management (St. Martin's Press, New York), 378-386.
6. Figures should be drawn in black ink on a separate sheet (1 original and 2 copies); the back of each sheet should state the author's name, the title of the article and the figure number.
7. Authors correct the galley proofs. Extra corrections to the proofs (i.e., changes that amount to deviations from the submitted text) entail costs, which are charged to the authors.
8. Dutch-language articles should follow the preferred spelling (voorkeurspelling).
9. Any article that does not comply with the above instructions will be returned for the necessary revision.
INSTRUCTIONS TO AUTHORS
1. Papers for publication should be sent in triplicate to: Tijdschrift voor Economie en Management, Prof. Dr. P. Van Cayseele, p/a Mevr. A. Ronsmans, Redactiesecretariaat, Dekenstraat 2, 3000 Leuven, Belgium. Submission of a paper will be held to imply that it contains original unpublished work and is not being submitted for publication elsewhere.
2. Manuscripts should be typed double-spaced on one side of the paper only, with the author's name clearly indicated.
3. Each paper should have an elaborate introduction and conclusion with summary. These should state the rationale and relevance of the research reported, as well as its main findings and their policy relevance.
4. Footnotes should be kept to a minimum and numbered consecutively. They are put at the end of the text.
5. The conventions for references are those of the European Economic Review. In the text, references to publications should appear as follows: "As argued by Goldfeld and Quandt (1973) ..." or: "Decision tables ... see Verhelst (1980)". The author should make sure that there is a strict one-to-one correspondence between the names (years) in the text and those on the list. At the end of the manuscript (after any appendices), the complete references should be listed as: For monographs: Verhelst, M., 1980, De praktijk van beslissingstabellen (Kluwer, Deventer-Antwerpen). For periodicals: Goldfeld, S. and Quandt, R.E., 1973, A Markov Model for Switching Regressions, Journal of Econometrics 1, 3-15. For contributions to collective works: Taylor, B., 1970, Financing Tables and the Future, in Taylor, B., ed., Investment Analysis and Portfolio Management (St. Martin's Press, New York), 378-386.
6. Diagrams should be in a form suitable for immediate reproduction: 1 original drawn in black ink on white paper and 2 photocopies. Care should be taken that lettering and symbols are of a comparable size. The drawings should not be inserted in the text and should be marked on the back with figure number, title of paper, and name of author.
7. Contributors are responsible for the correction of galley proofs. Corrections other than printer's errors may be charged to the author. 3 copies are supplied free; additional copies are available at cost if they are ordered when the proof is returned.
8. Any manuscript which does not conform to the above instructions may be returned for the necessary revision before publication.