SUMMARIES OF GRADUATION REPORTS, FACULTY OF ELECTRICAL ENGINEERING, 1996
Eindhoven University of Technology accepts no liability for the contents of the graduation report summaries included in this volume.
CONTENTS

DEPARTMENT OF TELECOMMUNICATION TECHNOLOGY & ELECTROMAGNETICS
  Chair of Radio Communications
  Chair of Electro-Optical Systems
  Chair of Semiconductor Devices
  Chair of Information and Communication Theory
  Chair of Electromagnetics

DEPARTMENT OF SYSTEMS FOR ELECTRONIC SIGNAL PROCESSING
  Chair of Electronic Circuits
  Chair of Electrical Engineering Materials
  Chair of Signal Processing

DEPARTMENT OF MEASUREMENT & CONTROL SYSTEMS
  Chair of Measurement and Control
  Chair of Medical Electrical Engineering
  Chair of Electromechanics & Power Electronics

DEPARTMENT OF INFORMATION & COMMUNICATION SYSTEMS
  Chair of Digital Information Systems
  Chair of Automatic System Design

DEPARTMENT OF ELECTRICAL ENGINEERING
  Chair of Electrical Energy Systems
  Chair of High-Voltage Engineering & EMC

Outside the Faculty of Electrical Engineering
DEPARTMENT OF TELECOMMUNICATION TECHNOLOGY & ELECTROMAGNETICS
CHAIR OF RADIO COMMUNICATIONS
Candidate: S. Khusial
Graduation date: 27 June 1996
Graduation project: An antenna for the Base Station of the Median Demonstrator
Graduation professor: prof.dr.ir. G. Brussaard
Supervision: dr.ir. M.H.A.J. Herben, dr.ir. P.F.M. Smulders
Summary
This thesis treats the design of an antenna intended for operation at the base station of a broadband wireless LAN demonstrator. This demonstrator will be developed in the framework of the ACTS project MEDIAN (Wireless Broadband Customer Premises Network/Local Area Network for Professional and Residential Multimedia Applications). The applied frequency band is 62-63 GHz. According to the MEDIAN specifications, the antenna has to radiate downwards and provide a circular footprint 3 metres below; the diameter of this coverage plane-section should be 8 metres. Two antenna types have been investigated, viz. the bent biconical-horn antenna and the shaped reflector antenna. Analysis of the radiation pattern of the bent biconical-horn antenna yields an unacceptable fluctuation of the field strength in the coverage plane-section of several dB, so this option can be ruled out for our application. In contrast, analysis of the shaped reflector antenna on the basis of Geometrical Optics and the Uniform Theory of Diffraction yields promising results: a reflector with a diameter of only 30 centimetres could be shaped such that the spatial fluctuation of the field strength remains below 0.5 dB in the coverage plane-section, whereas outside this coverage plane-section the field strength falls off very rapidly.
Candidate: M.J.H. Peters
Graduation date: 27 June 1996
Graduation project: Interworking Functions in a Wireless Broadband Local Area
Graduation professor: prof.dr.ir. G. Brussaard
Supervision: dr.ir. P.F.M. Smulders
Summary
The reason for this research was the need to develop and implement interworking functions in the Advanced Communications Technologies and Services (ACTS) project MEDIAN (wireless broadband customer premises network/local area network for professional and residential multimedia applications). The MEDIAN project is characterised by: the transmission frequencies, considered to be placed in the millimetre wave band (60 GHz); the transfer technology used, being the Asynchronous Transfer Mode (ATM); and the high bit rates available to the users, with a maximum throughput of 155 Mbit/s. The research has been done in association with TNO-FEL and the University of Rome "La Sapienza".
Chapter one gives an introduction to ATM. Because of the large number of networks (telephone network, telex network, private networks) and the services (including multimedia) that need to be transported by those networks, the infrastructure is very complex and the accompanying administrative load is heavy. Simplifying the infrastructure by sharing one network for the transport of different services could be a solution to both the increasing complexity of the infrastructure and the heavy administrative load; this simplification could be achieved by the introduction of ATM. This chapter also contains a description of the MEDIAN project and its application scenarios.
Chapter two treats the aspects of signalling, i.e., establishing, maintaining and releasing a call/connection. This chapter, called SIGNALLING, describes the different states in which an ATM call/connection can be situated. It continues with the introduction and explanation of the required signalling messages. Within the MEDIAN system three different types of call/connection are identified: an interurban call/connection initiated by a non-MEDIAN end-user, one initiated by a MEDIAN end-user, and a local call/connection. The chapter ends with a brief discussion of signalling within the MEDIAN project.
Chapter three treats the Protocol Reference Model (PRM) of the Broadband Integrated Services Digital Network (B-ISDN). This B-ISDN PRM is used to express the layers (functions) needed in the different network units within the MEDIAN system. The B-ISDN PRM is subdivided into three layers (physical layer, ATM layer, ATM adaptation layer), which are extensively discussed, and one general layer (describing the higher layers), which is only briefly addressed. Within the MEDIAN system, optical fibre channels and radio channels are used. Radio channels are not covered by the B-ISDN PRM, because that model was developed from the system point of view rather than the radio point of view. Therefore additional layers are introduced to the B-ISDN PRM to overcome this problem: the MAC layer and the interworking layer, which are briefly addressed. The next step in the process of developing and implementing interworking functions is the specification of the MEDIAN demonstrator protocol stack.
Chapter four treats the assumptions and targets underlying the development of the MEDIAN interworking layer in the MSS. The approach followed in developing the protocol description is subdivided into four steps. The first step defines the different scenarios in which the MEDIAN system can operate. In the second step the message flow, concentrating on the internal MEDIAN message flow, is derived using the scenarios resulting from step 1. The third step derives from the different message flows (derived in step 2) a detailed description of the actions that have to be executed in the MSS (concentrating on the actions in the interworking layer).
The final step uses the results of step 3 as input to derive a detailed description of the protocol used in the MSS interworking layer to control call/connection establishment, maintenance and release. This final step produces three different protocol descriptions, which together define the protocol of the MSS interworking layer.
Chapter five treats the assumptions and targets concerning the development of the MEDIAN interworking layer in the MPS. The MPS is subdivided into a portable radio part (PRP) and a user terminal equipment (UTE). The focus is on the PRP, because a UTE able to communicate over optical fibre using ATM already exists. The assumptions made and the targets for the PRP are treated. The approach used in chapter four is also applied in developing the protocol description of the MEDIAN interworking layer in the MPS. Chapter six, finally, presents the conclusions and recommendations.
Candidate: T. Uildriks
Graduation date: 29 August 1996
Graduation project: Accuracy of noise measurements on receivers for radio-astronomical research
Graduation professor: prof.dr.ir. G. Brussaard
Supervision: ir. J. Dijk, ir. G.H. Tan (ASTRON)
Summary
The graduation work was carried out at the Netherlands Foundation for Research in Astronomy (ASTRON). This foundation has the task of promoting and supporting astronomical research in the Netherlands. Within ASTRON, research is being done on a new generation of astronomical receiver systems. For the Westerbork Synthesis Radio Telescope (WSRT) a new front-end is currently being developed and produced. With this Multi Frequency Front-End (MFFE) it is possible to receive several astronomical frequency bands, whereas until now a physically different front-end had to be installed for each frequency band. For astronomical receivers it is important that the internal noise is as low as possible; it is therefore equally important to be able to measure this noise accurately during development and production. The assignment within the graduation project was to determine, and where possible improve, the accuracy of noise measurements, and to apply this to the 6 cm band of the MFFE prototype. To this end an analysis has been made of the deviations that can occur and of their influence on the noise factor to be measured. It is concluded that for accurate noise measurements the "hot/cold" method is preferable to measurements with a diode noise source. Other important conclusions are that the ambient temperature must be measured and that the average of a number of consecutive noise measurements must be taken to obtain accurate results. To perform noise measurements on the MFFE it was necessary to make a coax-to-waveguide transition, so that the noise signal could be coupled into the MFFE. To remove the influence of this adapter from the final measurement result, its scattering parameters had to be measured; this was done using a sliding load. In the final measurement, a gain fluctuation caused by the temperature control of the RF section turned out to disturb the measurement. This was solved by measuring directly after the low-noise amplifier. As the end result, the total inaccuracy in the measured noise factor of the 6 cm band (4.77-5.02 GHz) was determined to be about 0.11 dB, with a typical deviation of about 0.5 dB.
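The hot/cold method preferred in this work can be illustrated with a short calculation: the receiver is presented with two matched loads at known physical temperatures, and the ratio of the measured output powers (the Y-factor) yields the receiver's equivalent noise temperature. The sketch below is illustrative only; the load temperatures, power values and function names are assumptions, not values from the report.

```python
import math

def noise_temperature(p_hot, p_cold, t_hot=295.0, t_cold=77.0):
    """Receiver noise temperature in kelvin from the output powers (in watts)
    measured with a hot load (t_hot) and a cold load (t_cold) at the input."""
    y = p_hot / p_cold                      # the Y-factor
    return (t_hot - y * t_cold) / (y - 1.0)

def noise_figure_db(t_e, t0=290.0):
    """Noise figure in dB, referring the noise temperature to T0 = 290 K."""
    return 10.0 * math.log10(1.0 + t_e / t0)
```

For example, a measured power ratio of 2 between a 295 K and a 77 K load corresponds to a noise temperature of 141 K, i.e. a noise figure of about 1.7 dB.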
Candidate: E.E.J. Weijers
Graduation date: 25 April 1996
Graduation project: Capacity enhancement of the GSM system using adaptive antennas
Graduation professor: prof.dr.ir. G. Brussaard
Supervision: dr.ir. M.H.A.J. Herben, ir. J.R. Schmidt (KPN Research)
Summary
Because the number of GSM users grows by the day, a capacity problem will arise in the near future. The intention is to solve this capacity problem by shrinking the current cells, so that the capacity (number of frequencies per unit area) increases. The problem that then arises is that shrinking the cells does not yield the desired capacity increase: because of the non-uniform distribution of the users, within the old large cell there will be new small cells with a capacity problem, while the surrounding small cells still have free channels. The use of these small cells brings further problems. First, more base stations will have to be installed. Second, more handovers will have to take place, resulting in a higher load on the GSM network. Third, the mobile station will have to perform measurements on more base stations of surrounding cells. Moreover, the system will suffer more from co-channel interference, because the distance between cells using the same frequencies becomes smaller. To counter these expected problems, research has been done into applying an adaptive antenna at the base station, which forms an antenna beam in the direction of each mobile user. With this antenna, more frequencies can be used in the current large cell without causing more co-channel interference. In this research the MUSIC algorithm was used. It was chosen because it can determine all possible directions of the radio waves incident on the base station, in contrast to other algorithms, such as ESPRIT, which can only determine directions unambiguously within an angle of 180°. The simulation results show that MUSIC also functions well in the presence of multipath signals, contrary to claims in the literature. This is because the multipath signals are not coherent, as is often assumed in mathematical analyses, but only highly mutually correlated, so that the correlation matrix used by MUSIC is still non-singular.
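As a rough illustration of the direction-finding principle described above (a minimal sketch, not code from the thesis), the MUSIC pseudospectrum for a uniform linear array can be computed from the correlation matrix as follows; the array geometry and signal parameters below are assumptions for the example.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for an M-element uniform linear array.

    R          : (M, M) array correlation matrix
    n_sources  : assumed number of incident wavefronts
    angles_deg : candidate arrival angles (degrees) to evaluate
    d          : element spacing in wavelengths
    """
    M = R.shape[0]
    # Eigendecomposition; eigh returns eigenvalues in ascending order, so the
    # first M - n_sources eigenvectors span the noise subspace.
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : M - n_sources]
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        # Steering vector of a plane wave arriving from angle theta.
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))
        # Peaks occur where the steering vector is orthogonal to the noise subspace.
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)
```

With a correlation matrix built from snapshots of a single wavefront arriving at 20°, the pseudospectrum peaks at 20°.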
CHAIR OF ELECTRO-OPTICAL SYSTEMS
Candidate: Y.L.C. de Jong
Graduation date: 25 April 1996
Graduation project: A CDMA based bidirectional communication system for CATV networks
Graduation professor: prof.ir. G.D. Khoe
Supervision: ir. R.P.C. Wolters
Summary
CATV networks are considered a promising infrastructure for implementing future interactive services. Efficient utilization of the available CATV spectrum requires a communication system that is dedicated to the specific properties of these networks. This thesis presents the physical layer of a bidirectional communication system for CATV networks based on CDMA, a technique with a certain robustness to the ingress found in the CATV bandwidth. It is shown that the application of ordinary, asynchronous CDMA results in a very poor spectral efficiency; therefore, a transmission scheme based on synchronous CDMA is adopted. A trade-off between required SNR and user capacity results in the selection of the QPSK modulation format. Using synchronous CDMA and QPSK modulation, a capacity of 64 channels of 64 kb/s each can be achieved in a bandwidth of 6 MHz. It is found that small synchronization errors can be tolerated without significant performance degradation. Next, a detailed description is given of the cable modems responsible for implementing the proposed transmission scheme. Their design exploits the typical CATV network configuration to enable a cost-effective hardware realization. The modem design is verified by computer simulations; simulated system performance is found to be in good agreement with the theory. The sensitivity of system performance to linear distortions is also investigated. Especially linearly varying group delay and fluctuating amplitude response are shown to be serious causes of performance degradation.
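The orthogonal spreading that underlies synchronous CDMA can be sketched as follows (an illustrative toy example, not the modem design from the thesis): synchronised users spread their symbols with mutually orthogonal Walsh codes, so a despreader correlating with one user's code recovers that user's symbol exactly, free of interference from the others.

```python
import numpy as np

def walsh_codes(n):
    """n x n Walsh-Hadamard code matrix (n must be a power of two);
    the rows are mutually orthogonal spreading codes of +/-1 chips."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def spread(symbols, codes):
    """Composite baseband signal: user k transmits symbols[k] on code k,
    and all users are chip-synchronous (the synchronous-CDMA assumption)."""
    return sum(s * codes[k] for k, s in enumerate(symbols))

def despread(signal, code):
    """Correlate the composite signal with one user's code."""
    return np.dot(signal, code) / len(code)
```

With QPSK each symbol would be a complex value such as (±1 ± j)/√2; real ±1 symbols are used here for brevity.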
Candidate: F.J.J. Kennis
Graduation date: 27 June 1996
Graduation project: The tunable three-section Distributed Bragg Reflector laser
Graduation professor: prof.ir. G.D. Khoe
Supervision: dr. Staring (Philips)
Summary
Tunable 3-section DBR lasers are key components in coherent optical transmission systems and in Wavelength Division Multiplexing (WDM) systems. These lasers consist of a gain section and two tuning sections, the PC and DBR sections respectively, and offer a wide tuning range while maintaining a high optical output power and a narrow linewidth, which is approximately independent of the emitted wavelength. To examine the wavelength dependence of the output power, 3 parameters of the lasers have been varied: the composition of the active layer, the composition of the waveguide in the PC and DBR sections, and the pitch of the grating. To examine the differences in the properties of the lasers with these variations, the tuning, power, L-I and linewidth characteristics have been measured. With respect to a bulk active layer, it is shown that application of quantum wells in the active layer results in a higher output power, a lower threshold current and a narrower linewidth. Application of a waveguide layer in the PC and DBR sections with a bandgap close to that of the active section leads to some amplification in those sections, compensating for the losses due to the increase in free-carrier absorption when the PC and DBR currents are increased. The optimum composition of this waveguide layer is expected to lie between Q1.45 and Q1.48 (near Q1.48 for the bulk lasers, and near Q1.45 for the quantum well lasers). Modulation of any one of the 3 sections of the DBR laser by an a.c. signal results in an intensity variation in the output power (AM response) and a variation in the wavelength of the laser (FM response). The AM response of the gain section, as well as the FM responses of the PC and DBR sections, have been investigated. It is shown that the AM and FM responses and their bandwidths are approximately the same for all devices. The FM responses and bandwidths of the PC and DBR sections depend on the pumping rate of these sections, and therefore vary with the output wavelength (high at low current, and low at high current). The DBR laser can be used as a wavelength converter: if an input signal at some input wavelength is injected into the gain section of the laser, the signal is transferred, as a result of cross-gain modulation, to an output wavelength that is tunable. An important parameter is the required input power for efficient wavelength conversion; therefore, the output power as a function of the input power has been measured for a number of devices. It is shown that the required input power for wavelength conversion can be reduced by applying a 10% coating on the gain section or by reducing the gain current. Furthermore, it is shown that extinction ratio enhancement can be obtained at low gain currents. Finally, some BER measurements have been performed to demonstrate wavelength conversion using a DBR laser.
CHAIR OF SEMICONDUCTOR DEVICES
Candidate: G.J.W. Peulen
Report no.: EEA 526
Graduation date: 27 June 1996
Graduation project: Application of Auger electron spectroscopy for the analysis of laser facet surfaces and passivated GaAs
Graduation professor: prof.dr. G.A. Acket
Supervision: ing. H.M. de Vrieze
Summary
This report describes the principles of Auger electron spectroscopy and how this technique can be used for the surface analysis of laser facets. These analyses are carried out by means of the Fisons VG100AX Auger spectrometer; the design and application of this type of spectrometer are also described. Auger electron spectroscopy is also used for the analysis of a new passivation method, which has recently been developed by the research department of the Philips Optoelectronics Centre. This passivation method is applied to prevent oxygen contamination arising between the cleaving of the laser bars and the deposition of a protective coating. The presence of oxygen at the laser facet, in combination with the high light intensities at the facet, causes a decrease of the catastrophic optical damage (COD) level, which seriously limits the optical output of the laser device. It appears that the newly developed passivation method results in a decrease of the amount of oxygen present on the cleaved facet when compared with an untreated sample. In addition, the passivation layer disappears when the sample is exposed to an argon sputter plasma. Furthermore, it appeared that the passivation substance leads to a further decrease in the amount of oxygen present on the surface after passivation.
Candidate: D. de Bruin
Report no.: EEA-528
Graduation date: 29 August 1996
Graduation project: A new model for the MOSFET operating in saturation suitable for distortion analysis
Graduation professor: prof.dr. L.M.F. Kaufmann
Supervision: ir. R. v. Langevelde, prof.dr. F.M. Klaassen
Summary
For the purpose of circuit simulation, many different MOS models have been developed over the years. These models can be divided into models for digital applications and models for analog applications; none of them is suitable for distortion analysis in analog circuits. Naturally, a need exists in the electronics industry for a model that is suitable for distortion analysis in circuit simulators, so a new MOS model suitable for circuit simulation has to be developed. The aim of this report is to describe a new, physically based model for the MOSFET operating in saturation that is suitable for distortion analysis in circuit simulators. The report describes several ways of modelling the MOSFET in saturation and presents a new model that includes channel-length modulation, describes the drain-voltage dependence of the threshold voltage, and is suitable for distortion analysis in circuit simulators. The new model describes the drain-voltage dependence of the MOSFET operating in saturation very well. However, the model lacks a physical basis for describing the gate-voltage dependence; using an empirical solution, it describes the gate-source voltage dependence well. In the future, the model has to be tested for the source-bulk voltage dependence and for the channel-length dependence.
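For reference, the classic first-order textbook description of the saturation region with channel-length modulation, which models such as the one above refine, can be sketched as follows; this is the standard square-law expression, not the model developed in the report, and the parameter values are purely illustrative.

```python
def drain_current_sat(vgs, vds, vt=0.7, beta=2e-4, lam=0.05):
    """First-order square-law MOSFET saturation current with channel-length
    modulation: Id = (beta/2) * (Vgs - Vt)^2 * (1 + lambda * Vds).

    vt   : threshold voltage (V)        -- illustrative value
    beta : transconductance factor (A/V^2)
    lam  : channel-length modulation coefficient (1/V)
    """
    vov = vgs - vt                      # overdrive voltage
    if vov <= 0:
        return 0.0                      # device off in this simple model
    return 0.5 * beta * vov * vov * (1.0 + lam * vds)
```

The (1 + lambda*Vds) factor is what gives the drain current its residual drain-voltage dependence in saturation, the effect the thesis model describes more accurately.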
Candidate: N.G.H. van Melick
Graduation date: 12 December 1996
Graduation project: RF Power Lateral DMOS transistor
Graduation professor: prof.dr. F.M. Klaassen
Supervision: dr. F. van Rijs (Philips Nat. Lab.)
Summary
The graduation project, carried out at the Philips Nat.Lab., concerned simulations of lateral DMOS transistors for applications at low supply voltages and high frequencies. The work consisted of several phases. First, the measurement results of the fabricated transistors were reproduced using two-dimensional process and device simulators, with special attention to hot-electron effects (peak fields and substrate currents). In addition, insight had to be gained into the optimum structure of the transistor in RF power operation. This was achieved by realising a coupling between the 2D process and device simulators and a high-frequency, large-signal simulator (MDS) by means of the Root model. Based on simulated process variations, and using this coupling, the influence of these variations on the RF power performance of the LDMOS was investigated, and the lateral DMOS transistor was optimised for a supply voltage of 3.6 V and a frequency of 1.8 GHz.
CHAIR OF INFORMATION AND COMMUNICATION THEORY
Candidate: M.A.M. van Gasteren
Graduation date: 29 August 1996
Graduation project: Perceptive Audiocompression with the Discrete Wavelet Transform
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Supervision: ir. J.E. Rooijakkers
Summary
Wavelets have become a popular research topic in recent years, with compression still one of the best-known applications, although mainly for images. The goal of this graduation project is to examine what wavelets can do for the compression of audio signals, focusing on signals of the compact disc format. Wavelet bases can be both orthogonal and local, and are obtained by dilation and translation of two functions, the so-called mother wavelet and scaling function. The Discrete Wavelet Transform, or DWT, consists of the projection of a signal on the wavelet basis. The main advantage of the DWT is its specific frequency-dependent division of the frequency-time domain, where low frequencies span a longer time interval and high frequencies cover a shorter period of time. Depending on the application, this division can be altered using wavelet packets. For the compression of DWT-transformed signals a coding algorithm has been designed that exploits the characteristics of these signals as much as possible. Six audio files were used to determine these characteristics, containing different types of music, castanets, and speech of both a female and a male. The frequency bands turned out to have very different amplitudes; therefore it was decided to code per frequency band, to obtain more accurate quantization and to exploit masking effects. The presented coder uses frequency bands that approximate the critical bands, and its quantization is based on the Laplace distribution. The importance of data is defined as a measure reflecting to what extent the data contributes to the quality of the signal: a lower importance implies that more compression distortion is allowed and therefore that a higher compression can be achieved. Furthermore, as part of the coding algorithm, zero-burst coding is designed to efficiently code the many occurring zeroes. Measuring the quality of coded audio is a well-known problem. Two kinds of measures exist: objective quality measures calculate the difference between the original and coded signals, while subjective quality measures consist of the grades that people give to the audio after listening. The segmental signal-to-noise ratio (SEGSNR) was used as an objective measure, together with listening tests to correct the SEGSNR, because the SEGSNR turned out to be highly dependent on frequency and the type of audio. The results are very promising: the information rates were reduced from 705 kb/s to 45-55 kb/s for the music and the castanets, and to about 30 kb/s for the speech files. At these rates the quality was 'nearly transparent'. Compared to other audio coders, the presented coder achieves a high compression while maintaining a high quality, with a relatively low complexity. With an optimization of its speed, the compression can probably be done in real time; the decompression already runs in real time. In the future this coder can be developed further to achieve lower rates at equal quality, by optimizing the current algorithm and incorporating more knowledge of the human auditory system.
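A single level of the DWT described above splits a signal into a low-frequency (approximation) band and a high-frequency (detail) band. A minimal sketch using the Haar wavelet, the simplest wavelet, chosen here purely for illustration (the thesis does not necessarily use Haar):

```python
import math

def haar_dwt_step(x):
    """One level of the Haar DWT: pairwise sums (low band) and differences
    (high band), scaled by 1/sqrt(2) to keep the transform orthonormal.
    The length of x must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt_step(approx, detail):
    """Inverse of one Haar DWT level; reconstruction is exact."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x
```

Applying `haar_dwt_step` recursively to the approximation band produces the frequency-dependent time-frequency division the summary describes: each deeper level halves the bandwidth and doubles the time span of the low band.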
Candidate: P.F. van Gils
Graduation date: 12 December 1996
Graduation project: Implementation aspects of the CTW algorithm
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Supervision: dr.ir. F.M.J. Willems, dr.ir. Tj.J. Tjalkens
Summary
In the Information and Communication Theory group of the Faculty of Electrical Engineering at Eindhoven University of Technology, research is being done on, among other things, the Context Tree Weighting (CTW) algorithm [1]. In practice, the standard floating-point implementation of this algorithm suffers from complexity (and thus speed) problems; solving these problems was a suitable graduation assignment. As part of the assignment a practical application also had to be implemented: the compression of black-and-white images, which requires relatively little memory. CTW is an algorithm that predicts the probability of each symbol in a given sequence. During the research a new implementation of the CTW method, called integer CTW, was derived. This implementation uses tables, and all computations are carried out with integers, which makes it very simple to realise, both in software and in hardware. To compress data, an arithmetic coder is needed in addition to the CTW algorithm; this coder uses the predictions of the CTW algorithm. The coder used here is based on the step-coder [2], and its implementation closely matches that of the integer CTW: it also uses tables and performs all computations with integers. The parameters of the integer CTW and the arithmetic coder were tested in a wide variety of experiments, and several implementations were examined: integer CTW with the Krichevsky-Trofimov (KT) estimator, integer CTW with an improved KT estimator, an implementation without CTW (no weighting), and an implementation with a finite-state estimator. The conclusions of this research are that integer CTW is a good solution to the problems of floating-point CTW, and that it is easy to implement in both hardware and software. Further research is probably needed on the finite-state estimator, on reducing the table sizes, and on other applications. It is also not yet clear whether the gain obtained by using CTW for the compression of black-and-white images justifies the increase in complexity.

[1] F.M.J. Willems, Y.M. Shtarkov and Tj.J. Tjalkens, "The Context-Tree Weighting Method: Basic Properties," IEEE Transactions on Information Theory, vol. 41, no. 3, May 1995, pp. 653-664.
[2] Tj.J. Tjalkens, "Efficient and Fast Data Compression Codes for Discrete Sources with Memory," Ph.D. dissertation, Eindhoven University of Technology, September 1987.
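The Krichevsky-Trofimov estimator mentioned above assigns a probability to the next binary symbol from the counts of zeros and ones seen so far in a context. A minimal floating-point sketch (the thesis precisely replaces such floating-point computations by integer table lookups; function names are the author's, for illustration):

```python
def kt_probability(n0, n1, next_symbol):
    """KT estimate of the probability that the next symbol equals
    `next_symbol`, given n0 zeros and n1 ones observed so far:
    P(0) = (n0 + 1/2) / (n0 + n1 + 1), and symmetrically for P(1)."""
    count = n0 if next_symbol == 0 else n1
    return (count + 0.5) / (n0 + n1 + 1.0)

def kt_block_probability(bits):
    """Probability the KT estimator assigns to an entire bit sequence,
    obtained by chaining the per-symbol estimates."""
    p, n0, n1 = 1.0, 0, 0
    for b in bits:
        p *= kt_probability(n0, n1, b)
        if b == 0:
            n0 += 1
        else:
            n1 += 1
    return p
```

In CTW, such estimates are maintained in every node of the context tree and weighted together; an arithmetic coder then turns the resulting probabilities into a code word.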
Candidate: R.R.J. Koster
Graduation date: 17 October 1996
Graduation project: Context aspects of repetition-based algorithms
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Supervision: dr.ir. F.M.J. Willems, dr.ir. Tj.J. Tjalkens
Summary
There are two important classes of compression algorithms. The first class consists of algorithms that tell the decoder where the decoded segment can be found in the past; these are the repetition-time based algorithms. The second class consists of algorithms that construct, based on the context, a probability distribution for the next symbol and encode that symbol using this distribution. Algorithms of the first class perform well, use relatively little memory and are reasonably fast. Algorithms of the second class perform better, but generally use more memory and are slower. We investigated possibilities of introducing conditioning into algorithms based on repetition times, hoping to improve the compression results of these algorithms while keeping the increase in complexity modest. First, a conditioned version of the repetition-time algorithm was studied. This algorithm works with a fixed block length and encodes a block by transmitting the repetition time of that block. We studied an algorithm that conditions the repetition time on the context, keeping the context depth, like the block length, constant. For this algorithm it has been shown that, for sufficiently large block lengths, it achieves the entropy. An implementation of this conditioned repetition-time algorithm was written and used for measurements. These measurements were carried out on a number of constructed sources and a number of text files, for various block lengths and context depths. At equal block length, the conditioned repetition-time algorithm gives better compression results than the original repetition-time algorithm, although its memory usage has increased. Conditioning thus turns out to be worthwhile. In addition, we investigated the ACB algorithm, an algorithm that uses a variable context depth and block length.
Of this algorithm, only a brief and rather unclear description is available. A number of interpretations of the algorithm have been developed. Here, a distinction is made between two parts of the algorithm: the index coding and the match-length coding. The index coding makes use of context information, and several interpretations are possible here. In the match-length coding, a number of innovations emerge; these have only partially been implemented. Implementations of the interpretations of the ACB algorithm have been written. First, measurements were performed to determine which interpretation performs best. With the two best interpretations, measurements were made to determine the relation between the buffer lengths used and the rate. For both sets of measurements, both constructed sources and text files were used. The compression results of the ACB interpretations are disappointing: on the text files, the compression is no better than that of the conditioned repetition-time algorithm. On the constructed sources, some implementations do perform better.
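The repetition-time principle described above can be illustrated with a small sketch (a simplified illustration, not the thesis implementation): a block is encoded by the distance back to its most recent earlier occurrence, and conditioning restricts the search to positions preceded by the same context, which tends to shorten the repetition times.

```python
def repetition_time(seq, i, block_len, ctx_depth=0):
    # Repetition time of the block starting at position i: the distance back to
    # the nearest earlier occurrence of the same block. With ctx_depth > 0 the
    # search counts only positions preceded by the same context, which is the
    # conditioning studied above (a didactic sketch only).
    block = seq[i:i + block_len]
    ctx = seq[i - ctx_depth:i]
    t = 0
    for j in range(i - 1, ctx_depth - 1, -1):
        if ctx_depth == 0 or seq[j - ctx_depth:j] == ctx:
            t += 1                      # count only context-matching positions
            if seq[j:j + block_len] == block:
                return t                # repetition time within this context
    return None                         # block has not occurred before
```

In the string "ababab", the block "ab" at position 4 has unconditioned repetition time 2, but conditioned on a one-symbol context it has repetition time 1: the conditioned times are smaller, and smaller times are cheaper to encode.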
Candidate: M.W. Krom
Graduation date: 29 August 1996
Graduation project: Coderingen voor stuksgewijs stationaire boom-bronnen
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Supervision: dr.ir. F.M.J. Willems
Summary
Members of the Information and Communication Theory research group have developed an efficient modelling algorithm. This 'Context Tree Weighting (CTW)' algorithm can be used for the compaction of the output of binary tree sources. Tree sources produce symbols, e.g. 0 or 1, where the probability of a particular symbol x_t depends on the symbols produced in the past (..., x_{t-2}, x_{t-1}), the so-called context. It is often assumed that this context is no more than D symbols deep. The relation between the context and the symbol probabilities can be represented by a tree structure, the model. The leaves of the tree contain the symbol probabilities, the parameters of the source. Using weighting, the CTW algorithm can efficiently estimate the model and the parameters of the source. This fails, however, for sources that exhibit transitions in which the model and/or parameter values change. Such sources are called piecewise stationary binary tree sources. Within the research group, a weighting algorithm has also been developed that can efficiently compact the output of a binary memoryless source with parameter transitions. The graduation project consisted of combining these two algorithms, so as to make the CTW algorithm suitable for piecewise stationary binary tree sources. In carrying out the project, the author developed two algorithms. The first algorithm is suitable for piecewise stationary binary tree sources in which only the parameter values change and the model of the source remains constant. Using weighting, this algorithm estimates the transition pattern of the parameters of the source. This gives a linear memory complexity and a quadratic computational complexity.
The redundancy of the algorithm is defined as the logarithm of the true probability of a symbol sequence of length T divided by the estimated probability of this sequence. A theoretical analysis shows that the algorithm has a redundancy of about 0.5 log(T) per parameter, plus about 1.5 log(T) per transition of a parameter, plus a constant term for the model. The second algorithm can also handle changes of the model of the source. Using weighting, this algorithm estimates the transition pattern of the entire source. It also has a linear memory complexity and a quadratic computational complexity. Its redundancy amounts to about 1.5 log(T) for each parameter of the source, plus about 1.5 log(T) for each transition of the entire source, plus a constant for estimating the model in each stationary interval. The quadratic computational complexity of the two algorithms is problematic. Therefore, three reduced variants of the two algorithms have been devised and analysed. The most successful variant places a number (log(T)) of points in an exponential distribution over the past. This yields algorithms whose redundancies do not differ much from those of the full versions, with an enormous reduction of the computational complexity (to T log(T)).
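The weighting idea underlying CTW can be sketched for the plain, stationary case with the Krichevsky-Trofimov (KT) estimator; the piecewise stationary extensions developed in the project are not reproduced here. Each context-tree node mixes its local KT estimate with the product of its children's weighted probabilities.

```python
from math import exp, lgamma

def kt_prob(a, b):
    # Krichevsky-Trofimov probability of a binary string containing
    # a zeros and b ones (log-gamma form for numerical stability).
    return exp(lgamma(a + 0.5) + lgamma(b + 0.5)
               - 2.0 * lgamma(0.5) - lgamma(a + b + 1.0))

def ctw_prob(seq, past, depth, ctx=""):
    # Weighted coding probability of seq for a context tree of the given depth,
    # with `past` supplying context for the first symbols. Each node averages
    # its KT estimate with the product of its children's weighted
    # probabilities -- the CTW mixture (a didactic sketch of plain CTW).
    full = past + seq
    a = sum(1 for i in range(len(seq))
            if seq[i] == '0' and full[:len(past) + i].endswith(ctx))
    b = sum(1 for i in range(len(seq))
            if seq[i] == '1' and full[:len(past) + i].endswith(ctx))
    pe = kt_prob(a, b)
    if len(ctx) == depth:
        return pe
    pw = (ctw_prob(seq, past, depth, '0' + ctx)
          * ctw_prob(seq, past, depth, '1' + ctx))
    return 0.5 * (pe + pw)
```

Since the weighted probability is a valid coding distribution, the probabilities of all sequences of a given length sum to one, and -log2 of it is the ideal code length.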
Candidate: M. Mitrea
Graduation date: 25 April 1996
Graduation project: Compression techniques for video and graphics
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Supervision: dr.ir. P.H.N. de With, dr.ir. Tj.J. Tjalkens
Summary
In this report, the design of a low-complexity compression system is presented that performs both lossy coding of the video signal and lossless coding of the graphics. The implemented system improves the quality of the graphics areas within sequences containing video mixed with graphics. The target compression factor was 2. Two system architectures were taken into consideration: one in which the video and graphics are stored in separate memories, and one in which video and graphics are stored jointly in the same frame data area, thus saving memory. In the first system, ADRC was used for the lossy coding of the video data, but error-free coding of the graphics elements within an image is required because of the inadmissible quantization errors that ADRC coding introduces in graphics. The finite-state-machine model indicated three algorithms as adequate candidates for the lossless coding of teletext data: contour coding, template coding and run-length coding. The other types of graphics have not been taken into account during this model-based evaluation. The best results in terms of compression versus complexity (a compression factor of almost 8) were obtained with the run-length coding algorithm. The second system, which stores video and graphics jointly in the same frame data area, requires more complexity. A detection algorithm has been developed to distinguish between the two types of data, video and graphics. The system employs a powerful detection procedure for the graphics areas and afterwards also includes the neighbouring blocks in the graphics region, to ensure that the whole graphics area is covered by the error-free coded domain. Because of the large variety in the graphics data, run-length coding does not achieve the expected compression here. Moreover, due to imperfections in the detection system, some video blocks are classified as graphics and thus error-free coded.
We have found that a more adequate algorithm for the lossless coding of graphics is a decorrelation algorithm followed by an arithmetic coder. Using this system, the graphics areas are error-free coded while the compression factor is kept greater than 2. A less complex variant of the system is also presented. It eliminates the graphics artifacts introduced by the quantizer used for the video compression, but it does not encode the graphics without errors. For sequences with graphics areas, a fixed compression factor of 2 can no longer be guaranteed, because the lossless coding of the graphics requires a variable number of bits depending on the data characteristics. A regulation system was therefore implemented that can switch between a lossless and a lossy mode for the coding of the graphics components whenever a factor-2 compression cannot be guaranteed. The switch between the two compression modes is controlled by the difference between the bit costs paid for graphics in the previous frames. In this way, a system was obtained that always achieves the desired compression, while simultaneously trying to code all graphics elements losslessly.
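Run-length coding, the winning candidate in the first system's evaluation, can be sketched in a few lines (a generic illustration of the principle, not the SBIP/teletext implementation): each scan line is stored as (value, run length) pairs, which is very compact for the long constant runs typical of graphics.

```python
def rle_encode(row):
    # Run-length encode one scan line of grey values as (value, run) pairs.
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([v, 1])         # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    # Inverse operation: expand the pairs back into the scan line.
    return [v for v, n in runs for _ in range(n)]
```

A line of six pixels with three runs, [7, 7, 7, 0, 0, 255], becomes [(7, 3), (0, 2), (255, 1)]; natural video, with few constant runs, gains little, which matches the report's observation that RLE disappoints on general graphics data.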
Candidate: A. Pasic
Graduation date: 27 June 1996
Graduation project: Turbo Codes and a MAP Algorithm for Byte Decoding
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Supervision: dr.ir. F.M.J. Willems
Summary
Three years ago, in 1993, turbo codes were introduced as a new class of convolutional codes. Since then they have been studied extensively, and there is no longer any doubt about their performance. The return-to-zero sequence concept turns out to be a good tool for determining the weight distribution of turbo codes. The first part of this graduation project discusses the design of the interleaver for a turbo code, in which return-to-zero sequences prove to be of crucial importance. For decoding turbo codes, the symbol maximum a posteriori (MAP) algorithm is often used. In the second part of the project, a new decoding algorithm is described. This algorithm minimizes the packet-error rate, where a packet is a number of consecutive symbols, e.g. a byte. The proposed algorithm can therefore be seen as an intermediate form between the Viterbi algorithm on the one hand and the symbol MAP algorithm on the other. Besides producing hard decisions on the packets, the algorithm also provides information on the reliability of these hard decisions, the so-called soft output. Simulations with the new algorithm show that its hard-decision performance does not differ appreciably from that of the Viterbi algorithm. Finally, the complexity of the packet-MAP algorithm was examined; it turns out to be larger than, but still comparable to, that of the Viterbi algorithm.
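The distinction between symbol-wise MAP and packet-MAP decisions can be seen on a toy posterior distribution (a generic illustration, unrelated to the actual trellis computations of the project): the jointly most probable packet need not consist of the individually most probable symbols.

```python
def symbol_map(joint):
    # Symbol-wise MAP: marginalize the joint pmf over bit tuples and decide
    # each bit separately (minimizes the symbol-error rate).
    n = len(next(iter(joint)))
    return tuple(1 if sum(p for x, p in joint.items() if x[i] == 1) > 0.5 else 0
                 for i in range(n))

def packet_map(joint):
    # Packet MAP: pick the jointly most probable bit tuple
    # (minimizes the packet-error rate).
    return max(joint, key=joint.get)
```

For the posterior P(00) = 0.4, P(10) = 0.3, P(11) = 0.3, the packet-MAP decision is (0, 0), while the symbol-wise decisions give (1, 0), a tuple whose joint probability is lower: the two criteria genuinely differ.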
Candidate: E.W.E. Roos
Graduation date: 12 December 1996
Graduation project: Complexity reduction of a Viterbi decoder
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Supervision: dr.ir. F.M.J. Willems
Summary
Within this project, various methods have been studied to reduce the memory usage of the Viterbi decoder for Digital Video Broadcasting (DVB). A Viterbi decoder can be used to decode convolutional codes; in DVB, a convolutional code is used to correct channel errors. A convolutional code has a very simple encoder, but decoding such a code with the Viterbi algorithm is often a very complex process. The code from the DVB standard has 64 states, and for every state a path register of 144 bits is needed. When the Viterbi algorithm is integrated on a chip, these path registers occupy about half of the chip area. We investigate a number of techniques to halve the number of path registers of the DVB Viterbi decoder, so that the chip area can be reduced and the chip becomes cheaper. A first method to reduce the number of path registers is the so-called M-algorithm. This algorithm keeps the best M states at each time instant and discards the remaining states. For M equal to 32, the bit error probability is not much larger than that of the full Viterbi algorithm. A second technique is the so-called complement algorithm. This technique groups the states into pairs and from each pair keeps only the best state; the other state is discarded at the next time instant. This halves the memory usage. However, the bit error probability of this algorithm turns out to be very large compared with the full Viterbi algorithm. Its performance can be improved somewhat by using a predecoder. A predecoder makes a rough estimate of the data, which is then re-encoded and subtracted from the received data. When decoding the sequence obtained in this way, mainly the zero state occurs.
By always retaining this zero state, we achieve a reduction of the error probability. Both a hard-decision predecoder and a more complex reduced-state decoder have been investigated. Another way to improve the complement algorithm is to store the metrics of all 64 states after all, but to use only 32 path registers. This also has a favourable effect on the bit error probability. Finally, the so-called overlap algorithm was studied. This method divides the states into 32 overlapping groups; in a fixed order, the worst state in each group is removed in turn. This technique has a high computational complexity, but its performance is correspondingly reasonable.
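The pruning step of the M-algorithm described above fits in a few lines (a sketch of the selection rule only; the add-compare-select recursion and path registers of the real decoder are omitted): of all surviving states, only the M with the best accumulated path metric are kept.

```python
def m_algorithm_prune(metrics, M):
    # Keep only the M states with the best (lowest) accumulated path metric and
    # discard the rest -- the state-reduction rule of the M-algorithm.
    # `metrics` maps state -> accumulated path metric.
    survivors = sorted(metrics, key=metrics.get)[:M]
    return {s: metrics[s] for s in survivors}
```

With 64 states and M = 32, as in the report, only half the path registers are needed; the cost is that the correct path is lost whenever its state falls outside the best M.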
CHAIR OF ELECTROMAGNETICS
Candidate: J.C. Bogerd
Graduation date: 27 June 1996
Graduation project: Electromagnetic excitation of a thin wire: a travelling-wave approach
Graduation professor: prof.dr. A.G. Tijhuis
Supervision: prof.dr. A.G. Tijhuis
Report no.: EM-4-1996
Summary
An approximate representation for the current along a perfectly conducting straight thin wire is presented, and its validity is investigated. The current is approximated in terms of pulsed waves that travel along the wire with the velocity of the exterior medium. At the ends of the wire, these pulses are partially reflected, with a fixed reflection coefficient and delay time. These parameters, and the strength of the current that is excited directly by a voltage or plane-wave excitation, are determined by comparing the approximate expression for the current at a single point with the results of a numerical computation with the marching-on-in-time method. Subsequently, the travelling-wave representation for the current is used to derive an approximate expression for the electric field outside the wire that is caused by this current. This expression contains an integral over the initial pulse, which must be computed numerically, and closed-form contributions from all reflected pulses. Although the expression obtained is essentially a far-field approximation, it turns out to be valid from distances of the order of a single wire length. Results for a representative choice of wire dimensions and pulse lengths are presented and discussed. When the pulse excites only a single natural mode of the wire, the analytical and numerical results are almost indistinguishable. For an excitation of up to ten modes, differences between the two results are observed. However, the analytical approach is still accurate enough to provide reasonable quantitative and good qualitative insight into the electromagnetic behavior of the wire. This makes the model suitable for several "statistical" applications, such as the scattering of an electromagnetic wave by a cloud of metal wires, also referred to as chaff.
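The travelling-wave representation can be sketched numerically (a simplified illustration with a hypothetical pulse shape, feed point and reflection coefficient, not the thesis model): a pulse launched at one wire end travels at speed c and is partially reflected at each end, so the current at an observation point is a sum of delayed, attenuated copies of the pulse.

```python
def wire_current(t, z, L, c, pulse, refl, n_bounce=8):
    # Travelling-wave sketch of the current at position z on a wire of length L:
    # the initial pulse is launched at z = 0 in the +z direction, travels at
    # speed c, and picks up a factor `refl` at every end reflection.
    i = 0.0
    for k in range(n_bounce):
        # pulse on its (k+1)-th forward pass, path length 2kL + z
        i += refl ** (2 * k) * pulse(t - (2 * k * L + z) / c)
        # pulse on its (k+1)-th backward pass, path length 2(k+1)L - z
        i += refl ** (2 * k + 1) * pulse(t - (2 * (k + 1) * L - z) / c)
    return i
```

With a unit rectangular pulse on a wire of unit length (c = 1, refl = -0.8, both hypothetical values), the midpoint sees the direct pulse of amplitude 1 at t = 0.5 and the first end reflection, of amplitude -0.8, at t = 1.5.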
Candidate: S. Hulshof
Graduation date: 29 November 1996
Graduation project: Analysis of infinite arrays of Vivaldi-like antennas
Graduation professor: prof.dr. A.G. Tijhuis
Supervision: dr. M.E.J. Jeuken, dr.ir. A.B. Smolders (HSA)
Summary
In this report, a model is presented for analysing and designing large arrays of tapered slot antennas above a ground plane. Tapered slot antennas are known for their wideband behaviour in combination with a large scan range, which makes them interesting candidates for various array-antenna applications. The analysis is based on a rigorous method of moments that includes the exact Green's function of the periodic structure. Due to the periodic nature of the array, the analysis can be restricted to a single unit cell. Mutual coupling and other array effects, like blind scan angles, are automatically included in the analysis. The analysis is further simplified by using the equivalence theorem, which divides the original problem into two simpler problems. The model has been implemented in software (Matlab format) and is validated by considering various test cases, such as monopole and dipole arrays, for which results are known in the literature. Arrays of such elements can also be analysed thanks to the generality of the model. The agreement of the test cases with the results from the literature is very good. In this report, most attention has been devoted to the investigation of so-called bunny-ear antennas. Several bunny-ear arrays are analysed, and it is shown that a relative bandwidth of 50% can be obtained, with scan angles of ±45° possible for all frequencies in the required frequency band. At broadside, a relative bandwidth of more than 100% can be achieved. The model is also used to investigate the SMART-L radar antenna (HSA), which consists of a very large array of folded dipoles. The results show some characteristic properties of the antenna which agree with experimental data.
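The trade-off between scan range and element spacing mentioned above is governed by the classical grating-lobe condition d <= lambda / (1 + sin(theta_max)). This rule is standard array theory background, not a result from the report, but it shows why wideband wide-scan arrays such as the ones analysed here need closely spaced elements.

```python
from math import radians, sin

def max_element_spacing(wavelength, max_scan_deg):
    # Largest element spacing of a periodic array that keeps grating lobes out
    # of visible space when scanning out to max_scan_deg -- the classical
    # d <= lambda / (1 + sin(theta_max)) rule (textbook array theory).
    return wavelength / (1.0 + sin(radians(max_scan_deg)))
```

For a ±45° scan range, the spacing must stay below about 0.59 lambda at the highest operating frequency; for scanning to endfire (90°) it drops to 0.5 lambda.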
Candidate: W.D.R. van Ooijen
Graduation date: 29 August 1996
Graduation project: On the feasibility of detecting fractures in artificial heart valves
Graduation professor: prof.dr. A.G. Tijhuis
Supervision: ir. E.S.A.M. Lepelaars
Summary
The possibility of detecting mechanical defects in certain artificial heart valves is studied by investigating a simplified model configuration via computational simulations. This model consists of a perfectly conducting circular thin-wire segment embedded in a homogeneous, dispersive, dielectric medium. In this report, three different configurations containing such a circular thin-wire segment are considered. In the first configuration the circular wire is completely closed; in the second configuration the circular wire is interrupted by an opening. These configurations represent a perfect and a completely broken heart valve, respectively. In the third configuration, an impedance is included in the circular wire to model a partial fracture. In all three cases, the mathematical formulation of the problem is carried out in the same way. From an integral relation for the electric field in the Laplace domain, we derive a one-dimensional integral equation for the total current along a circular thin-wire segment. This equation is referred to as Pocklington's equation. To avoid numerical differentiations, a Green's function technique is then used to derive an equivalent form, which is referred to as Hallen's integral equation. In addition to the unknown current, this equation contains two homogeneous solutions with unknown amplitudes, which must be determined by imposing two additional boundary conditions. For the open loop, the current must vanish at the ends of the wire. For the closed loop, periodic boundary conditions are imposed on the Green's function. This leads to two different versions of Hallen's equation. The introduction of an impedance into the circular wire can be accounted for by including an extra term in Hallen's equation for the closed wire. Hallen's equations cannot be solved in closed form; therefore the unknown current must be determined numerically.
In the case of the completely closed circular wire, the periodicity is used to directly invert the discretized integral equation with the aid of a discrete Fourier transformation. For the broken wire and the wire with a load, the conjugate-gradient FFT method is applied to calculate the current along the wire. Results for the calculated current along the wire are given for the case of a Gaussian voltage pulse and for incident fields originating from an electric and a magnetic point dipole, respectively. The late-time behavior of the current along the wire, when excited by a Gaussian voltage pulse or a magnetic point dipole, depends significantly on the status of the wire. To complete the analysis, we address the principle of simultaneously generating and detecting the current along the wire representing the heart valve by exciting this wire with a second circular ring. The problem is formulated in terms of two coupled integral equations for the currents on both wires. The calculations indicate that the late-time behavior of the current along the second wire (the antenna) depends significantly upon the status of the first one (the heart valve).
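The direct inversion used for the closed wire relies on a general fact: a periodic (circulant) discretized operator is diagonalized by the DFT, so the linear system can be solved by pointwise division in the transform domain. A generic numerical sketch of this technique, with a tiny hand-rolled DFT and a hypothetical 4-point system (not the thesis code):

```python
import cmath

def dft(x, inverse=False):
    # Naive O(n^2) discrete Fourier transform, sufficient for a small example.
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[j] * cmath.exp(s * 2j * cmath.pi * j * k / n) for j in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def solve_circulant(first_col, rhs):
    # A circulant matrix C (first column `first_col`) acts as a circular
    # convolution, so C x = rhs becomes Fc * Fx = Fr in the transform domain
    # and is inverted by pointwise division -- the direct DFT inversion
    # referred to above for the periodic (closed-wire) case.
    Fc, Fr = dft(first_col), dft(rhs)
    sol = dft([r / c for r, c in zip(Fr, Fc)], inverse=True)
    return [v.real for v in sol]
```

For the circulant with first column [4, 1, 0, 1] and right-hand side [10, 12, 18, 20], the routine recovers the solution [1, 2, 3, 4] without ever forming or factoring the matrix, which is what makes the periodic case so much cheaper than the broken-wire case.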
Candidate: I.A. Vreeken
Graduation date: 15 February 1996
Graduation project: Analysis of infinite arrays of broadband antennas
Graduation professor: prof.dr. A.G. Tijhuis
Supervision: dr. M.E.J. Jeuken, dr.ir. A.B. Smolders (HSA)
Report no.: EM-1-1996
Summary
Much research has already been done on suitable antenna elements for phased-array antennas. Next to microstrips, tapered-slot antennas seem very promising at this moment. A tapered-slot antenna consists of one or two pairs of protruding strips above a ground plane, possibly on a dielectric slab, where the tapering of the strips can be varied. Some of these types are the Constant-tapered Slot Antenna, the Linearly-tapered Slot Antenna and the Exponentially-tapered Slot Antenna (Bunny-ear Antenna and Vivaldi Antenna). In the early days of phased-array research, the behaviour of the array was modelled by simply adding up the contributions of all elements. In today's array theory, mutual coupling is included, which is necessary for arrays with small element distances. Mutual coupling causes elements within the array to behave differently and therefore has a strong influence on the overall performance. An important simplification is often made by using the infinite-array concept. In this concept, the array is assumed to be so large in terms of the wavelength that adding elements at the outside of the array will not influence the behaviour of the elements near the centre. If the array is assumed to contain an infinite number of elements, the mathematics is significantly simplified. The hybrid Green's function/Method of Moments approach is considered the best method to implement these concepts; moreover, it is a well-known method. In the course of the research, standard methods for arrays of planar elements such as microstrip antennas appeared to be less suitable for protruding antennas. The equivalence theorem was therefore used to obtain a structure in which the calculations are easier, at the cost of extra fictitious magnetic currents. After calculation of the Green's functions and application of the equivalence theorem, boundary conditions are constructed and tested. The resulting matrix equation is then solved for the unknown current on the antenna.
From this current, the input impedance, reflection coefficient and radiation pattern are derived. Initially, the aim of this research was to develop and implement a suitable model to analyse protruding antennas in general, and the bunny-ear in particular, all for the case of antennas in air. A model has been developed and implemented. It has been validated with an infinite array of monopoles, closely matching results in the literature. At this stage, results for the bunny-ear cannot yet be given. The developed software is expected to be easily extended to other structures. This study must be considered a starting study, a basis for further research. Follow-up projects are necessary to generate results of practical use.
Candidate: F.P. van der Wilt
Graduation date: 12 December 1996
Graduation project: A Second-Order FDTD Scheme for the Calculation of Electromagnetic Fields near an Interface Between Two Media
Graduation professor: prof.dr. A.G. Tijhuis
Supervision: ir. B.J.A.M. van Leersum (TNO-FEL), ir. J.J.A. Klaasen (TNO-FEL)
Report no.: EM40
Summary
The finite-difference time-domain (FDTD) method is a computational method for calculating the temporal evolution of the electromagnetic field in a given region of space. From its derivation from Maxwell's curl equations, it follows that the method is second-order accurate for scatterers described by continuously varying material parameters. However, if the material parameters contain discontinuities, for instance at an interface between two media, the conventional derivation of the FDTD method is no longer valid. In this report, three generalizations of the FDTD scheme are derived by applying the integral form of Maxwell's equations to the spatial (Yee) grid and approximating the integrands occurring in these equations by Taylor-series expansions; the integral form of Maxwell's equations remains valid at and near a discontinuity. In contrast with the original FDTD scheme, the new schemes are second-order accurate at straight interfaces between two lossless media. The third scheme is an improved version of the other two and has the lowest number of terms per equation. Moreover, this version is identical to the original FDTD scheme in which the arithmetic mean of the permittivity $\epsilon$ and the harmonic mean of the permeability $\mu$ are used. This scheme can also be derived from the differential form of Maxwell's equations, by using Taylor series and boundary conditions; however, second-order accuracy could not be proven from this derivation. Representative results are presented and discussed to demonstrate the validity of our analysis, specifically for the 2D problem of the scattering of an electrically or magnetically polarized pulsed plane wave at a straight interface between two homogeneous, lossless dielectric and magnetic materials.
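The averaging rule of the third scheme is easy to state in isolation (a sketch of the material-averaging step only; the full FDTD update equations and the proof of second-order accuracy are in the report): a grid cell straddling the interface uses the arithmetic mean of the two permittivities and the harmonic mean of the two permeabilities.

```python
def interface_parameters(eps1, eps2, mu1, mu2):
    # Effective material parameters for a Yee cell on a straight interface,
    # following the averaging rule of the third scheme described above:
    # arithmetic mean for the permittivity, harmonic mean for the permeability.
    eps_eff = 0.5 * (eps1 + eps2)
    mu_eff = 2.0 * mu1 * mu2 / (mu1 + mu2)
    return eps_eff, mu_eff
```

For media with relative parameters (1, 1) and (3, 3), the cell on the interface is updated with eps_eff = 2.0 and mu_eff = 1.5; plugging these into the unmodified FDTD update reproduces the improved scheme.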
DEPARTMENT OF SYSTEMS FOR ELECTRONIC SIGNAL PROCESSING
CHAIR OF ELECTRONIC CIRCUITS
Candidate: C.A.H. van Buel
Graduation date: 12 December 1996
Graduation project: Implementation of an adaptive array on four DSP's
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Supervision: dr.ir. P.C.W. Sommen, ir. D.W.E. Schobben
Summary
This report describes the implementation of an adaptive array on four DSP's. The goal of the project is to investigate the real-time implementation aspects of large adaptive arrays. The implementation is done using the software and hardware available in the Signal Processing Workplace. The hardware used to implement the adaptive array consists of four DSP's, which are part of the PD-TIMEX modular DSP system. This system is based on the PD-TIM40 module, a DSP module that conforms to the Texas Instruments TIM-40 specifications. The PD-TIM40 incorporates the TMS320C40 DSP from Texas Instruments. The software consists of two parts: a real-time operating system, called Virtuoso, and development tools. Virtuoso offers support for parallel processing, portability (ANSI C) and ease of use. Together with the development tools (assembler, compiler, linker) from Texas Instruments, Virtuoso forms the programming environment. The adaptive array that is implemented is a broadband array in which the Frost algorithm is used to update the weights. The Frost algorithm is based on constrained power minimization: the weights of the broadband array are chosen such that they minimize the total output power, subject to the constraint that the gain in some look direction is fixed. The Frost algorithm is implemented using data partitioning, so that each processor performs exactly the same function, but on different subblocks of the data. Several implementations of the Frost algorithm have been made, using different communication methods. The test results show that a major bottleneck in performance is the data transport between the four DSP's. This problem is solved by using a direct addressing method instead of the Virtuoso communication methods.
The implementation of the Frost algorithm has shown that communication is an important aspect of parallel processing, and one that is not well supported by Virtuoso. Algorithms that use block processing are expected to mitigate this problem.
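The constrained power minimization performed by the Frost algorithm can be sketched for a single-constraint, narrowband case (a didactic sketch, not the multichannel broadband DSP implementation): each iteration takes an LMS gradient step on the output power and then projects the weights back onto the constraint surface, so the look-direction gain stays fixed exactly.

```python
def frost_update(w, x, mu, c, g=1.0):
    # One iteration of Frost's constrained LMS beamformer: a gradient-descent
    # step on the instantaneous output power, followed by projection back onto
    # the constraint sum_i c_i * w_i = g (fixed gain in the look direction).
    y = sum(wi * xi for wi, xi in zip(w, x))          # array output sample
    cc = sum(ci * ci for ci in c)
    z = [wi - mu * y * xi for wi, xi in zip(w, x)]    # unconstrained LMS step
    cz = sum(ci * zi for ci, zi in zip(c, z))
    # project: z  ->  z - c*(c.z)/(c.c) + c*g/(c.c)
    return [zi - ci * cz / cc + ci * g / cc for zi, ci in zip(z, c)]
```

By construction, the inner product of the constraint vector with the updated weights equals g after every step, regardless of the data, which is what distinguishes Frost's algorithm from plain LMS.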
Candidate: W.J.S. Hermsen
Graduation date: 29 August 1996
Graduation project: Implementation of an Analog Neural Network using EEPROM technology
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Supervision: dr.ir. J.A. Hegt, dr.ir. F. Widdershoven (Philips Research), dr.ir. A.J. Annema (Philips Research)
Summary
This report describes an analog neural network based on a new analog multiplier that uses standard digital EEPROM technology. Functionality and characterization measurements have been performed on this new multiplier. Using these results, peripheral hardware for the multiplier was designed and a programming scheme was created. With these, a small neural network was designed and laid out.
Keywords: neural chips, analogue storage, neural nets, EEPROM
Candidate: B. Kessels
Graduation date: 27 June 1996
Graduation project: Design of a normalized grey value correlator
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Supervision: dr.ir. P. Sommen, ir. J. Bernsen, dr. M. Brok (Philips)
Summary
Philips Industrial Vision (PIV), a department within Philips' Centre for Manufacturing Technology (CFT), operates in the area of industrial image processing. This department uses its Single Board Image Processor (SBIP) in various industrial image processing applications, such as identification, inspection and position measurement systems. Normalized correlation can be used for position measurements of arbitrary objects, independent of contrast variations in images. However, it is very computation intensive: even with subsampling techniques, normalized correlation is rather slow on the SBIP. Therefore PIV needed an extension board with a digital signal processor that accelerates computation-intensive algorithms such as the normalized grey value correlation. Since the correlator uses subsampling techniques, the effects of subsampling and prefiltering on the correlator performance were investigated. The design of the extension board resulted in a small image processing module that can operate in parallel with the SBIP. The board offers the possibility to transfer images to and from its image memory at transfer rates up to 20 MHz. The extension board, based on Analog Devices' ADSP-21062 DSP, improves the processing performance of the SBIP thanks to the high instruction rate of the DSP, the zero-wait-state image memory and the DSP's calculation capabilities. The implementation of the normalized grey value correlation on the extension board reduces the execution time for a search of a 200 x 200 template in a 512 x 512 window from 15 seconds on an SBIP to 125 ms. The architecture of the extension board allows code to be ported from the SBIP fairly easily, so vision functions that currently run on the SBIP can be implemented on the extension board. Especially vision functions that perform intensive calculations and/or make intensive use of memory are expected to run significantly faster on the extension board.
It should be pointed out, however, that for certain algorithms assembly-language programming might be required to use the DSP optimally and achieve a high performance improvement. The analysis of the subsampling effects and prefiltering, in which the image was assumed to be free of noise and distortions, showed that a filter can be designed that improves the correlator performance when applied to the template. This analysis is to be completed in the future by optimizing the filter. Furthermore, since the analysis assumed rather ideal conditions, it is to be extended by also taking noise and possible distortions of the object in the image into account.
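The contrast invariance of the normalized correlation can be sketched in a few lines (flat lists of grey values and a single window position; a didactic sketch, not the SBIP/DSP implementation): the means are subtracted and the result divided by the norms, so an affine change of the grey values leaves the score unchanged.

```python
from math import sqrt

def normalized_correlation(window, template):
    # Normalized grey-value correlation of a template with an equally sized
    # image window: zero-mean both, then divide by the product of their norms.
    # The result lies in [-1, 1] and is invariant to contrast (scale) and
    # brightness (offset) changes of the window.
    n = len(template)
    mw = sum(window) / n
    mt = sum(template) / n
    dw = [w - mw for w in window]
    dt = [t - mt for t in template]
    num = sum(a * b for a, b in zip(dw, dt))
    den = sqrt(sum(a * a for a in dw) * sum(b * b for b in dt))
    return num / den if den else 0.0
```

A window that is a brightened, higher-contrast copy of the template still scores exactly 1.0; a position search slides the template over the image and keeps the location with the highest score.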
Candidate: F.M.P. Brouwers
Graduation date: 29 August 1996
Graduation project: Design of a 10 bit D/A converter
Graduation professor: prof.dr.ir. R.J. van der Plassche
Supervision: ir. P. Vorenkamp (Philips Natlab), dr.ir. R. Roovers (Philips Natlab)
Summary
A circuit implementation of a 10-bit DAC with a conversion speed of 1 GS/s and ECL-compatible inputs is presented. For the implementation of the DAC, the double-poly technology (OBIC100) is used. The DAC is based on a coarse-fine architecture, with 5 MSBs driving the coarse part and 5 LSBs driving the fine part. The LSBs are binary weighted by means of an R-2R ladder network, and the MSBs are converted by a linear thermometer decoder which controls 31 equal current cells. A new architecture for the DAC is also given. Due to signal-delay variations of the wires on the chip, the output frequency spectrum of the DAC contains harmonic distortion. The most important contribution is that of the internal clock delay, which adds second and third harmonic components to the output (THD). A solution to this problem is to address the current cells of the coarse part in a random fashion: the conversion error due to the internal clock delay then consists of white noise instead of harmonic components. This can be achieved with a random segment decoder, for which there are two possibilities: full random decoding and incremental-random decoding. Both methods increase the spurious-free dynamic range (SFDR) to more than 65 dB for input signal frequencies up to half the sampling frequency. For input frequencies below 100 MHz, the noise added by the increased glitch error of the full random decoder is greater than the reduction in THD, resulting in a decrease in the S/(N+THD) ratio. With the incremental-random decoder less noise is added, which results in a small noise power and a small THD. For frequencies above 100 MHz, the reduction in THD exceeds the added noise, so both randomizers significantly increase the S/(N+THD) ratio for these input frequencies, compared with the case where no random decoder is used.
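The coarse-fine splitting described above can be sketched as pure decoding logic (the analogue behaviour of the R-2R ladder, the current cells and the randomized addressing are not modelled): the 5 MSBs select how many of the 31 equal coarse cells are switched on, and the 5 LSBs remain a binary code for the fine part.

```python
def coarse_fine_decode(code):
    # Split a 10-bit input code into 31 thermometer-coded coarse cells (5 MSBs)
    # and a 5-bit binary fine code (LSBs), as in the coarse-fine architecture.
    msb = code >> 5                              # coarse segment count, 0..31
    lsb = code & 0x1F                            # fine binary code, 0..31
    thermometer = [1] * msb + [0] * (31 - msb)   # 31 unary current cells
    return thermometer, lsb
```

Each coarse cell weighs 32 LSBs, so summing the active cells and adding the fine code reconstructs the input exactly; the thermometer coding guarantees monotonicity of the coarse part, since increasing the code can only switch cells on.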
41
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
J.B. Clink 27 June 1996 Design of a 1 GSample/sec 8-bit folding analog-to-digital converter prof.dr.ir. R.J. van der Plassche prof.dr.ir. R.J. van der Plassche
Summary In this graduation report the design of a 1 GSample/sec 8-bit analog-to-digital converter is presented. The converter is based on the folding and interpolation architecture. There are several folding architectures, each with its own characteristic problems. In this report a folding principle is implemented that does not use the amplitude information of the folding signals, but uses the zero crossings in the folding signals. The folding signals are made by using cascaded folding, which relaxes the matching requirements in the folding blocks. Interpolation is used to omit some folding blocks, so that a hardware reduction can be gained. The partition into the fine and coarse part is discussed, as well as the way the interpolation should be implemented. One can interpolate signals at different positions in the architecture. Interpolating at the input stage does not always give the best area reduction, while interpolating at the end of the folding signals can be difficult at high frequencies. In the presented architecture there is an interpolation at the first folding stage and an interpolation at the last folding stage. The comparators that are used have been optimized for a low error probability due to metastable states. An efficient way to implement the EXOR function at the comparators is shown. In the decoding of the fine signals a comparator-decision-error correction is implemented. In the coarse part an error correction is also implemented that compensates for delay differences in the fine and coarse signal paths. The design includes ECL outputs and overflow and underflow detection. Under ideal conditions a 50 dB spurious-free dynamic range can be achieved up to a 500 MHz full-scale sine-wave input signal. The total power consumption is approximately 1.5 W.
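The coarse/fine split of a folding converter can be illustrated numerically. The sketch below is a hypothetical ideal model with our own naming and a 3 coarse + 5 fine bit split for the 8-bit code, not the reported circuit: a triangular folding transfer repeats the input range, the fine code is resolved inside one fold, and the decoder re-inverts the fine code in odd folds where the slope is negative:

```python
FOLDS = 8          # folding factor -> 3 coarse bits
FINE_BITS = 5      # bits resolved inside one fold

def convert(x):
    """Ideal 8-bit folding conversion of x in [0, 1): coarse bits from
    the fold number, fine bits from the folded residue."""
    t = x * FOLDS
    seg = int(t)                                  # fold number -> coarse code
    frac = t - seg
    fine = frac if seg % 2 == 0 else 1.0 - frac   # alternating slope
    raw = min(int(fine * 2**FINE_BITS), 2**FINE_BITS - 1)
    if seg % 2 == 1:                              # decoder re-inverts odd folds
        raw = 2**FINE_BITS - 1 - raw
    return seg * 2**FINE_BITS + raw

assert convert(0.0) == 0
assert convert(1.0 - 1e-9) == 255
```

In the reported architecture the comparators look only at where such folding signals cross zero, since the crossing positions are insensitive to the folding amplitude.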
42
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
M.M.N. Storms 29 August 1996 Design of a 10 bit 1 GSample/s track-and-hold circuit prof.dr.ir. R.J. van der Plassche prof.dr.ir. R.J. van der Plassche
Summary In this report the design of a 10 bit, 1 GHz sample rate track-and-hold circuit with a 2 V peak-to-peak signal amplitude is described. The architecture on which the design is based is a single-channel differential configuration, chosen in order to meet several design specifications. The choice between an interleaved and a single-channel configuration is discussed. A new double-poly production process (OBIC) is used and treated; differences between the new and former processes are discussed. The track-and-hold circuit consists of five subcircuits: an input buffer, a switch with a hold capacitor, an output buffer, a clock buffer and a bias voltage circuit. All these circuits are analyzed and designed by means of simulation. The total circuit has been designed with a distortion below -72 dB over a 500 MHz full-Nyquist bandwidth, in order to cope with process matching and other non-idealities. Noise is also taken into account. A measurement setup has been proposed and implemented in the layout of the design.
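One classical constraint in such a design, implicit in any 10-bit track-and-hold although not spelled out in the abstract, is the kT/C noise of the hold capacitor. The sizing sketch below uses our own illustrative margin and numbers, not the actual design values:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant [J/K]
T = 300.0            # temperature [K]

def min_hold_cap(bits, vpp, margin=10.0):
    """Smallest hold capacitor for which the kT/C sampling noise power
    stays `margin` times below the quantisation noise of an ideal
    `bits`-bit converter with a `vpp` peak-to-peak input range."""
    lsb = vpp / 2**bits
    q_noise = lsb**2 / 12.0          # quantisation noise power [V^2]
    return margin * k_B * T / q_noise

C = min_hold_cap(10, 2.0)
print(f"C >= {C*1e12:.2f} pF")
```

For 10 bits and 2 Vpp this lands in the tenths-of-a-picofarad range, which is why the hold capacitor, switch and buffers dominate the speed/noise trade-off.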
43
LEERSTOEL ELEKTROTECHNISCHE MATERIAALKUNDE
45
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
H.G.P.H. Senten 25 April 1996 Designing a BiCMOS receiver channel for optical sensor applications prof.dr. T.G.M. Kleinpenning dr.ir. L.K.J. Vandamme
Summary A first prototype of a receiver channel for photodetector applications using amplitude-modulated signals has been designed. Two identical prototype receiver channels were integrated on a single chip using a standard 1.2 µm BiCMOS technology. Each channel comprises, among others, a preamplifier, voltage amplifiers, programmable feedback networks, and a switched demodulator. In this thesis, the design approach for the various receiver channel building blocks is described, with emphasis on important design decisions, possible alternatives, and practical circuit and layout solutions. The function of the receiver channel is to extract the slowly varying, information-carrying signal from a modulated carrier which is accompanied by relatively high levels of noise. Here, the modulated carrier is the output current of a photodetector. Amplification, demodulation, and filtering are necessary in order to obtain a slowly varying DC voltage proportional to the amplitude of the information-carrying signal. As a whole, the receiver channel can be characterised as a narrow-band filter around the frequency of interest. The preamplifier is designed to operate with a high-impedance source, e.g. a PIN photodiode. A summary of the receiver channel performance: input-referred noise current spectral density 0.15 pA/√Hz measured at 1 kHz, total channel transimpedance between 9 MΩ and 866 MΩ, digitally programmable in four discrete steps, lower -3 dB cutoff frequency 100 Hz, upper -3 dB cutoff frequency 25 kHz. The maximum voltage swing that can be obtained at the demodulator output is 2.4 V. These simulated performance characteristics meet the requirements imposed by applications including smart structures for stress and strain measurements, medical sensing, position measurement, remote sensing and control.
Based on simulation results, the noise performance and sensitivity of the integrated prototype receiver channel are comparable to those of existing channels implemented with discrete components. The reduction in size and required power are important improvements for multichannel applications mounted in a small casing. If the specifications concerning silicon area and power consumption allow it, the noise and sensitivity characteristics can be enhanced further. Indexing terms: analog signal processing, optical receivers, low frequency applications, IC design.
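The amplify-demodulate-filter chain described above can be mimicked in a few lines. The sketch below is a generic model of a switched (synchronous) demodulator, not the designed chip; all names and numbers are illustrative. Multiplying by a ±1 reference square wave at the carrier frequency and low-pass filtering yields a DC level proportional to the carrier amplitude:

```python
import math

def demodulate(signal, fs, fc, tau=0.01):
    """Switched demodulator: multiply by a +-1 square wave at the
    carrier frequency fc, then a one-pole low-pass (time constant tau)
    extracts the slowly varying amplitude information."""
    out, y = [], 0.0
    a = 1.0 / (1.0 + tau * fs)          # one-pole smoothing coefficient
    for n, x in enumerate(signal):
        ref = 1.0 if math.sin(2 * math.pi * fc * n / fs) >= 0 else -1.0
        y += a * (x * ref - y)          # low-pass the rectified product
        out.append(y)
    return out

fs, fc, amp = 100_000.0, 1_000.0, 0.5
sig = [amp * math.sin(2 * math.pi * fc * n / fs) for n in range(20_000)]
dc = demodulate(sig, fs, fc)[-1]
# for an in-phase carrier the settled level is the mean of amp*|sin|,
# i.e. 2*amp/pi
```

The low-pass stage is what makes the whole channel behave as the narrow-band filter around the carrier frequency mentioned above.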
46
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
J. Briaire 27 June 1996 Noise in anisotropic magnetoresistive NiFe elements prof.dr. T.G.M. Kleinpenning dr.ir. L.K.J. Vandamme dr. M. Gijs
Summary During a period of seven months the noise of structured NiFe (permalloy) films has been investigated at Philips Research Laboratories in Eindhoven. Special attention was paid to the differences in noise between single-domain and multi-domain samples. All measurements were conducted at room temperature. The only two types of noise present in these samples are 1/f noise and thermal noise. The 1/f noise power appears to be extremely sensitive to external magnetic fields and easily varies by a factor of 10 or more as a function of the applied fields. For samples which are prepared in such a way that they contain domain walls, the 1/f noise power seems to be related to the magnetic field in the same way, but the noise power can now vary by a factor of 100 or more. The 1/f noise also shows strong scattering when domain walls are present. Changes in 1/f noise power can be explained qualitatively by comparing them with the variance of the resistance as a function of the angle between magnetization and current. This shows that the 1/f noise is also related to the square of the first derivative of the resistance with respect to the external magnetic field. Using a simple energy model, the variance can be calculated reasonably well, except for a scaling factor. The parameters used for this calculation are based on a fit of the calculated resistance to the measured one. If domain walls are present, the energy of the system can change drastically when a domain moves. This can explain the differences in noise between the single-domain structures and the multi-domain structures. In order to be able to measure a sample thoroughly, two computer programs have been written. The first can measure the noise as a function of a magnetic field and the second determines the noise parameters of a noise spectrum.
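The second computer program mentioned above determines the noise parameters of a spectrum. A common way to do this, shown here as a hypothetical sketch (the abstract does not describe the actual program), is a least-squares fit of a 1/f term plus a white thermal floor, which is linear in both parameters and therefore reduces to 2x2 normal equations:

```python
def fit_noise_spectrum(freqs, psd):
    """Least-squares fit of S(f) = A/f + B (1/f noise plus thermal
    floor); the model is linear in A and B."""
    s11 = sum((1.0 / f)**2 for f in freqs)
    s12 = sum(1.0 / f for f in freqs)
    s22 = float(len(freqs))
    b1 = sum(p / f for f, p in zip(freqs, psd))
    b2 = sum(psd)
    det = s11 * s22 - s12 * s12
    A = (b1 * s22 - b2 * s12) / det
    B = (s11 * b2 - s12 * b1) / det
    return A, B

freqs = [float(f) for f in range(1, 1001)]
psd = [2e-16 / f + 4e-18 for f in freqs]      # synthetic spectrum
A, B = fit_noise_spectrum(freqs, psd)
# recovers A = 2e-16 (1/f magnitude) and B = 4e-18 (thermal floor)
```

On measured spectra the fitted A tracks the field-dependent 1/f noise power, while B stays at the thermal level.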
47
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
R.H.J. Grosfeld 25 April 1996 Addition of checks and developing test chips prof.dr. T.G.M. Kleinpenning dr.ir. L.K.J. Vandamme ir. H. Casier (Mietec Brussels)
Summary The final year's assignment was done at the company Alcatel Mietec in Brussels, a company that designs and produces semi-custom integrated circuits. The assignment consists of two parts: first, software had to be written to perform extra checks; second, several test chips had to be designed and laid out. The company used to have an old simulator program which performed some extra checks on the circuits, generating warnings or errors when necessary. Nowadays they use a new simulator program that did not have this feature. My task was to make it possible for the new simulator to perform these extra checks as well. To allow automated measurements, the test chips must be measurable easily and quickly. They are mainly meant for measuring the matching properties of resistors and MOS transistors. In particular, the influence of the layout design on the matching properties was examined. Another test chip was made with digital circuits to examine the influence of long internal connections on the delay time. The measurements show that the PMOS transistors have the expected matching behaviour. The NMOS transistors show a matching behaviour with a systematic mismatch, probably caused by the way the layout is made. The fact that only the NMOS transistors have a systematic mismatch is a new result for the company. More investigation is needed to find the cause of this systematic mismatch of the NMOS transistors.
48
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
K.J.P. Macken 15 February 1996 Recommendations for industrial distribution system design taking into account power quality prof.dr. T.G.M. Kleinpenning dr.ir. L.K.J. Vandamme dr.ir. M.H.J. Bollen (UMIST Manchester)
Summary This project deals with the features of different distribution arrangements for supplying an industrial customer, from a power quality point of view. From this point of view, supply arrangements can be subdivided as follows: a radial system is characterized by long sustained interruptions (several hours to days); redundancy through manual switching results in short sustained interruptions (one hour and longer); redundancy through automatic switching gives momentary interruptions (one second and longer); and redundancy through parallel operation gives voltage sags (less than one second). Criteria such as sustained interruptions, voltage sags, fast voltage transients, and harmonics are of great importance in this research. A method has been proposed to calculate a correction term to the failure rate, which accounts for the influence of multi-component failures, such as common-mode failures, weather-effect failures, dependent failures, and system operation failures in the parallel part, i.e. the part of the system with redundancy. An assessment has been made of sustained interruptions as well as of voltage sags related to a specific arrangement. More research is needed to reveal the influence of the network design on the proliferation of transients and on harmonic resonance. Based on these findings several recommendations are formulated which should be considered to improve quality when designing a network. Reliability studies may improve the selection of the system design, strengthened by decisions based on quantitative data rather than on intuitive and qualitative considerations. It is possible to reduce the number of outages by extra investments in system design. It has been shown that systems with local generation have a reliability comparable to systems with redundancy provided by the public supply. In the future it can be interesting to design plants with their own power generation system.
An assessment method was given to predict the voltage sag characteristics in different distribution networks. In the future it would be interesting to develop an automated system for distribution system design with respect to a required level of power quality at the load. This automated system could be integrated into a complete design expert system.
49
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
M.E.P. van der Steen 29 August 1996 Gummel-Poon DC and noise parameter determination prof.dr. T.G.M. Kleinpenning dr.ir. L.K.J. Vandamme
Summary The Gummel-Poon model describes the behaviour of the bipolar transistor. The DC part describes the collector and base currents as functions of the applied voltages VBE and VCE. These functions contain parameters that can be obtained by measuring the transistors under certain useful bias conditions and determining a slope, an intersection point etc. of the resulting curves (see [1]). The noise part describes the noise in the base current, which in the Gummel-Poon model is assumed to be composed of shot noise and white noise. In this work the DC and noise parameters of bipolar transistors were determined, for the company Alcatel in Brussels, Belgium. A new program, UTMOST, was used to measure the bipolar transistors and to determine the parameters from the measured curves. The transistors were supplied by Alcatel: three vertical NPN types of different geometries and two lateral PNP types with a small difference in base length. UTMOST can measure, extract parameters from the measured curves, and simulate with the extracted results. It was observed that optimisation was needed for some parameters, since there were differences between the measured and simulated results. This is because the extraction method of UTMOST does not depend on earlier extracted parameters, and because parasitic PNP transistors with the substrate as one terminal are present in reality but not included in the model. Optimisation of only a few parameters was needed to get simulated results close to the measured ones. Finally, five different setups were developed for the different transistors, which automatically measure the transistors, extract parameters and optimise, in order to arrive at a complete DC model. The noise parameters were measured manually and the parameters determined.
Also, the base resistance of one NPN transistor is determined using the 1/f noise in the collector current. Reference: [1] Getreu, I., "Modeling the bipolar transistor", Tektronix Inc., Beaverton, 1970.
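The slope-and-intercept extraction on the Gummel plot can be made concrete. Assuming the ideal forward characteristic IC = IS·exp(VBE/(NF·VT)) (standard Gummel-Poon notation; the helper below is our own illustration, not UTMOST), ln(IC) is linear in VBE, so a straight-line fit yields IS and the emission coefficient NF:

```python
import math

VT = 0.02585   # thermal voltage kT/q near 300 K [V]

def extract_is_nf(vbe, ic):
    """Straight-line fit of ln(IC) versus VBE on the ideal part of a
    Gummel plot: the slope gives NF, the intercept gives IS."""
    x, y = vbe, [math.log(i) for i in ic]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(v * w for v, w in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), 1.0 / (slope * VT)   # IS, NF

# synthetic ideal transistor: IS = 1e-16 A, NF = 1.0
vbe = [0.55 + 0.01 * k for k in range(11)]
ic = [1e-16 * math.exp(v / VT) for v in vbe]
IS, NF = extract_is_nf(vbe, ic)
```

On real devices the fit is restricted to the bias range where high-injection and series-resistance effects are negligible, which is exactly why parasitic devices forced the optimisation step described above.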
50
LEERSTOEL SIGNAALBEWERKING
51
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
J.P.A.M. Derksen Report no. ESP-18-96 17 October 1996 IIR modeling of acoustic impulse responses prof.dr.ir. W.M.G. van Bokhoven (a.i.) dr.ir. A.C. den Brinker ir. H.J.W. Belt
Summary Echo cancellation in hands-free communication is a widely investigated problem. Rooms in general have impulse responses with several thousands of nonzero samples when sampled at 8 kHz. A FIR (Finite Impulse Response) model of a room would lead to thousands of parameters that would have to be adapted within the sample time when the conditions in the room change (persons walking around etc.). This report aims to model the room impulse response with an IIR (Infinite Impulse Response) filter bank. One might expect IIR filter banks to match the physical problem better than FIR filter banks. A special realization of an IIR filter bank, the Kautz filter bank, is chosen for this purpose. The complexity of the problem is investigated with the aid of a physical model and the Gabor transform. Several modeling methods are investigated to test their suitability for modeling these room impulse responses; methods dedicated to this problem have also been tested. Of the investigated methods, the Prony method shows the best results: 844 poles were found with a suppression of 14 dB (without considering the adaptation process), in contrast to the FIR model, which needed 1200 coefficients.
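Prony's method, which gave the best results above, can be sketched for its pole-finding step. The code below is a minimal illustration on a synthetic single-resonance impulse response, not the thesis implementation: the denominator coefficients are estimated by least-squares linear prediction over the tail of the response:

```python
import math

def prony_denominator(h, p):
    """Prony's method, denominator step: after sample p the impulse
    response obeys h[n] = -a1*h[n-1] - ... - ap*h[n-p]; the a_k follow
    from least-squares linear prediction (normal equations G a = -b)."""
    rows = range(p, len(h))
    G = [[sum(h[n - i] * h[n - j] for n in rows) for j in range(1, p + 1)]
         for i in range(1, p + 1)]
    b = [sum(h[n] * h[n - i] for n in rows) for i in range(1, p + 1)]
    a = _solve(G, [-v for v in b])
    return [1.0] + a          # denominator polynomial coefficients

def _solve(M, v):
    """Tiny Gauss-Jordan solver with partial pivoting (p is small)."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# impulse response of one damped resonance: poles at r*exp(+-j*w)
r, w = 0.95, 0.3
h = [r**n * math.cos(w * n) for n in range(200)]
den = prony_denominator(h, 2)
# expected denominator: 1 - 2*r*cos(w)*z^-1 + r^2*z^-2
```

A room response needs hundreds of such pole pairs (844 poles above), but the estimation step scales in the same way.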
52
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
E.G.T. Jaspers 25 April 1996 Advanced luminance and chrominance sharpness enhancement for TV applications prof.dr.ir. W.M.G. van Bokhoven (a.i.) dr.ir. M.J. Bastiaans hr. de With (Philips)
Summary Sharpness is one of the most important factors in the perception of image quality. It is a subjective attribute which is determined by the human visual system, and the perception of sharpness differs for luminance and chrominance. Luminance can subjectively be made sharper by adding overshoot to the edges in the image. However, problems occur when doing so. The techniques for adding this overshoot to edges are very noise sensitive. Another problem is that the overshoot can be exaggerated, which is conspicuously visible. Furthermore, clipping of the signal, caused by exceeding the pixel range, may introduce aliasing components that are very annoying. Chrominance can only be improved by making the color edges steeper, since additional overshoot does not contribute to sharpness improvement. Problems that arise with the various techniques for this improvement are annoying jitter at color edges and deformation of these edges when the color signals are processed in combination. There are several techniques to establish two-dimensional sharpness improvement. Peaking, crispening, sharpness unmasking and statistical differencing are techniques considered for the luminance signal which are based on adding overshoot. The peaking adopted for sharpness enhancement of the luminance is based on convolution of a small kernel with the image. For this algorithm the above-mentioned problems and artifacts also arise. For this reason, the output of the convolution kernel is suppressed adaptively to the image when the artifacts occur. This is performed locally within the image, so that sharpness is preserved as much as possible. To reduce the noise sensitivity, two techniques are used. The first is to adapt the amount of overshoot to the amount of overshoot perceived by the human visual system; for equal sharpness improvement, the noise sensitivity is then reduced. The second technique reduces sharpness at the places that do not contain edges.
In these regions, where noise is most annoying, the noise boosting is suppressed. Sharpness decreases somewhat for noisy images, but is retained for noiseless images, because the suppression is adaptive to the amount of noise and is applied gradually. An exaggerated amount of overshoot is prevented by measuring the local steepness of the edges: steep edges would otherwise receive too much overshoot, so the steepness measurement suppresses the enhancement at those positions. Non-linear behaviour can generate additional frequencies beyond the Nyquist frequency; these new components are folded back into the lower frequency band (aliasing). Clipping, which is necessary to limit the pixel values, causes the most annoying aliasing. The only way to solve this problem is to prevent the clipping. This is done by counting the number of clippings that would occur in a sub-local region and suppressing the enhancement when this number is large. Sharpness does decrease for these sub-local regions, but the overall quality is still improved significantly. To improve the steepness of edges in the color signals, the pixels of the edges are reordered. To determine the position of a pixel on the edge, the first and second derivatives are analyzed. In this way it is known on which side of the centre of the edge the pixel is located and whether the edge is descending or rising. Because the displacements of the pixels are not allowed to exceed the width of the edge, the boundaries of the edge have to be determined first. The major problem of color transient improvement is an accurate determination of the pixel positions, because inaccuracy immediately leads to annoying jitter at the edges. The algorithms have been tested by computer simulations; the best results were obtained with the adaptive solutions.
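The kernel-based peaking described above can be illustrated in one dimension. This is a simplified sketch with hypothetical numbers, not the adaptive algorithm of the thesis: a [-1, 2, -1] kernel (a discrete Laplacian) adds overshoot to an edge, the overshoot is limited to curb exaggeration, and the result is clipped to the pixel range:

```python
def peaking(line, gain=0.5, limit=16, lo=0, hi=255):
    """1-D peaking: convolve with [-1, 2, -1], scale by `gain`, limit
    the overshoot to `limit`, and clip to the pixel range [lo, hi]."""
    out = [line[0]]
    for i in range(1, len(line) - 1):
        boost = gain * (2 * line[i] - line[i - 1] - line[i + 1])
        boost = max(-limit, min(limit, boost))   # suppress exaggerated overshoot
        out.append(max(lo, min(hi, line[i] + boost)))
    out.append(line[-1])
    return out

# a soft luminance ramp from 50 to 200
sharp = peaking([50] * 4 + [50, 100, 150, 200] + [200] * 4)
# undershoot appears at the foot of the edge, overshoot at the top
```

The thesis additionally makes `gain` and `limit` locally adaptive to noise, edge steepness and predicted clipping, which is the part this sketch omits.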
53
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
A.J. van Leest Report no. ESP-14-96 29 August 1996 Modulated filter banks based on IIR filters prof.dr.ir. W.M.G. van Bokhoven (a.i.) dr.ir. A.C. den Brinker ir. J.H.F. Ritzerfeld
Summary Subband filtering is a well-known technique with many applications. The filter banks used are usually based on FIR filters. However, it is well known that IIR filters give greater freedom than FIR filters when approximating a transfer function of a given complexity. This report considers filter banks which use stable and causal IIR analysis and synthesis filters. The starting point of this study was a recent publication on perfect-reconstruction two-channel filter banks. This two-channel filter bank showed a certain asymmetry in its amplitude transfer. It is found that more symmetrical solutions exist, but at the expense of lower stopband attenuation. The concept of the two-channel filter bank unfortunately cannot be extended to the more general case of M-channel filter banks. Therefore, a new filter bank based on IIR filters is proposed, in which the 2M analysis filters are derived from a prototype filter by exponential modulation. This filter bank does not have the perfect reconstruction property; however, the phase distortion, magnitude distortion and aliasing can be made very small at a very low complexity. The filter bank is not maximally decimated but oversampled by a factor of 2. Nevertheless, the overall complexity of the filter bank is still smaller than that of a filter bank using an FIR prototype with a comparable amplitude transfer.
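The exponential modulation step can be sketched directly. The code below is an illustrative construction with our own prototype and parameters, not the filters of the report: the 2M analysis filters are obtained by shifting one lowpass prototype to odd-stacked centre frequencies:

```python
import cmath, math

def modulated_bank(proto, M):
    """Derive 2M complex analysis filters from a lowpass prototype by
    exponential modulation; band k is the prototype shifted to centre
    frequency (2k+1)*pi/(2M)."""
    return [[p * cmath.exp(1j * math.pi * (2 * k + 1) * n / (2 * M))
             for n, p in enumerate(proto)]
            for k in range(2 * M)]

def dft_mag(h, nfft=256):
    """Magnitude of an nfft-point DFT (brute force, for inspection)."""
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * n / nfft)
                    for n, x in enumerate(h))) for k in range(nfft)]

M = 4
# crude raised-cosine lowpass prototype, 32 taps
proto = [math.sin(math.pi * (n + 0.5) / 32)**2 / 16 for n in range(32)]
bank = modulated_bank(proto, M)
# band k peaks at DFT bin (2k+1)/(4M) * nfft, e.g. bin 16 for k = 0
```

In the report the prototype is IIR rather than FIR, and the factor-2 oversampling is what keeps the aliasing of these complex-modulated bands small.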
54
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
S.A. Timmer Report no. ESP-20-96 12 December 1996 Using time-frequency transformation techniques in condition monitoring prof.dr.ir. W.M.G. van Bokhoven (a.i.) dr.ir. M.J. Bastiaans ir. A.J. Smulders (SKF-CM)
Summary A maintenance organization wants to schedule its machine maintenance. Accurate information about the condition of a machine at a very early stage of defect development enables the maintenance department to carry out maintenance during a scheduled maintenance stop. Increasing demands on vibration analysis tools in condition monitoring result in continuous research on improving these tools and expanding the signal classes that can be detected and represented. In an early stage, the impacts of a defect do not always occur regularly and are not necessarily stationary. A frequency decomposition with a Fourier transformation can hide the defect information by averaging the defect over multiple frequency components, and signals with a non-regular and non-stationary character are difficult to represent in a complete Fourier spectrum. Therefore, research has been done into time-frequency representation techniques, which are able to represent non-stationary signals; for a group of vibration signals these techniques can be useful. The time-frequency representations that have been investigated belong to the Cohen class. The discrimination of impulsive signals in these representations is very good. The definition of the Cohen class in the ambiguity domain makes it possible to filter out interference components. Filtering is done by laying a kernel window over the ambiguity domain. Three methods of constructing a kernel are treated. The constructed kernels are signal dependent and designed to optimally suppress cross-components that interfere with the auto-components of the representation. Simulations are performed on actually monitored machine defect vibration signals. The three kernel construction techniques are applied to the monitored signal to obtain the kernels and representations in the time-frequency domain. In practice the "1/10" kernel construction technique did not work at all.
On the other hand, simulations of time-frequency representations of vibration signals with a radial kernel show that it is possible to focus on specific properties of bearing vibration signals. The on-line technique, which uses adaptive kernels based on the radial kernel construction technique, needs further research. Specific research areas are indicated to enhance the on-line technique for vibration signals.
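The Cohen class underlying this work can be illustrated with its best-known member, the Wigner distribution, which corresponds to an all-pass kernel. The sketch below is a brute-force discrete version for a short signal (our own illustration, not the thesis software); the signal-dependent kernels studied above would additionally window the ambiguity function to suppress cross-terms:

```python
import cmath, math

def wigner(x):
    """Discrete Wigner distribution W[n][k], the Cohen-class member
    with an all-pass kernel (no cross-term smoothing)."""
    N = len(x)
    W = []
    for n in range(N):
        m_max = min(n, N - 1 - n)       # stay inside the signal
        row = []
        for k in range(N):
            acc = 0j
            for m in range(-m_max, m_max + 1):
                acc += x[n + m] * x[n - m].conjugate() \
                       * cmath.exp(-4j * math.pi * k * m / N)
            row.append(acc.real)
        W.append(row)
    return W
# a Cohen-class kernel would smooth this plane (equivalently, window
# the ambiguity function) before display

N, f0 = 32, 0.125
tone = [cmath.exp(2j * math.pi * f0 * n) for n in range(N)]
W = wigner(tone)
# the energy of a pure tone concentrates on the line k = f0 * N = 4
```

For impulsive bearing signals the attraction of this family is exactly the sharp localisation visible here, once the kernel has removed the interference terms of multi-component signals.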
55
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
E. Wachelder Report no. ESP-1-96 15 February 1996 The applicability of wavelets in image processing for copiers prof.dr.ir. W.M.G. van Bokhoven (a.i.) dr.ir. M.J. Bastiaans
Summary In a copier, the information passes through several stages on its way from original to copy, involving conversions and processing operations. A mixed signal description may offer advantages through the combination of localisation in the spatial domain and localisation in the frequency domain. The wavelet transform provides such a signal description, with in addition analysis properties at different scales, which closely matches the operation of the human visual system (HVS). A six-month graduation project was to clarify the applicability of the wavelet transform in image processing for copiers. An efficient implementation of the wavelet transform has been realised in the form of a filter bank, whose filters are derived from a set of basis filters by dyadic scaling. The filter bank implementation comes in two characteristic forms. In the first form, the dyadic scaling is realised by applying downsampling to the subbands, with the advantage that the data size of the transformed image remains equal to that of the original image. Several design methods were found in the literature that offer perfect reconstruction despite the aliasing introduced by the downsampling. Drawbacks of applying downsampling are the loss of positional accuracy and the occurrence of aliasing artifacts when a particular subband is amplified. The second form of the filter bank implementation uses upsampling of the filters, so that no aliasing is introduced. This creates additional design freedom to guarantee perfect reconstruction. In many of the wavelet applications in image processing found in the literature, this extra freedom is used to realise gradient properties. With this type of wavelet, an image is decomposed into edge transitions at different scales.
The bands of this type of wavelet transform are therefore also called multiscale gradients. The sensitivity to edges and the multiscale character closely match the operation of the HVS. The relevant image information turns out to be concentrated mainly in the maxima of the multiscale gradients. A compression application found in the literature thus yields a compression factor between 30 and 40, depending on the image content. The maxima give the exact edge positions (one-pixel-wide edges) at all scales, independent of the edge amplitudes. Thanks to the coherence between the scales, every transition in the image can be characterised by its maxima at the different scales. On the basis of only two parameters, this provides a measure of the nature of a transition, which offers attractive possibilities for segmenting types of information. Because of the similarities between the operation of the HVS and the multiscale gradients, the maxima indicate which edge information is detected by the HVS at the different scales. This makes it possible to selectively process image information that is relevant to the HVS, for instance for sharpening, contrast enhancement and noise shaping. A drawback of the multiscale gradients is their redundancy: every band has the same size as the original image. The information in the maxima, however, is sufficient to reconstruct the remaining points with a projection algorithm. By storing only the positions and values of the maxima, a considerable data reduction can be achieved. The projection algorithm has not been implemented; its feasibility, however, is demonstrated by the numerous applications in the literature (compression, contrast enhancement, noise suppression).
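The second (upsampled-filter) form of the filter bank can be sketched crudely. The code below uses a bare difference filter upsampled by 2^j per scale, which is only a caricature of the smoothed gradient wavelets from the literature, but it shows the two properties discussed above: every band keeps the input size, and the modulus maxima mark the edge position at all scales:

```python
def multiscale_gradients(signal, levels=3):
    """Undecimated dyadic decomposition: at scale j the difference
    filter [1, -1] is upsampled by 2**j (zeros inserted between taps),
    so every band keeps the size of the input and no aliasing is
    introduced by downsampling."""
    bands = []
    for j in range(levels):
        step = 2 ** j
        bands.append([signal[i] - signal[i - step] if i >= step else 0.0
                      for i in range(len(signal))])
    return bands

sig = [0.0] * 8 + [1.0] * 8          # a step edge at position 8
bands = multiscale_gradients(sig)
# the modulus maximum of every band marks the edge position (index 8)
```

Comparing the maxima amplitudes across scales is what characterises the nature of a transition; with a step input all scales respond equally here, whereas a blurred edge would favour the coarser scales.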
56
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
P.P.M. Bongers Report no. ESP-7-96 27 June 1996 Analysis of and measurements on non-linear detection systems prof.dr.ing. H.J. Butterweck prof.dr.ing. H.J. Butterweck
Summary Radio detection systems have become an integral part of society. Everyone knows the tags attached to articles in some shops to prevent theft: a special gate at the shop entrance detects the tags and thus unmasks a thief, while the tags are removed from purchased articles. In libraries, similar anti-theft tags are glued into books, which are deactivated and reactivated with special equipment when books are borrowed and returned. A further development lies in the field of radio identification, in which different tags can be distinguished. In this graduation project a variant of radio detection was investigated. The starting point is a normal tag, consisting of a resonant circuit with linear elements. A non-linear element, e.g. a diode, is added to it, in order to add a second selection criterion to the frequency selectivity. The properties of the tag are analysed, with special attention to the advantages and disadvantages of a non-linear tag compared to a linear one. The tag was then realised and measured in a setup designed for this purpose, and the measurement data were compared with the analysis. Tags with several resonant circuits were also investigated in the same way. A second important part of the graduation project was the collection of articles dealing with radio detection and identification, of which a survey has been written.
57
VAKGROEP MEET & BESTURINGSSYSTEMEN
59
LEERSTOEL METEN EN REGELEN
61
Candidate: Graduation date: Graduation project: Graduation professor: Supervision:
M.G.M. Bloemen 25 April 1996 Vision applied to research on biological crop protection prof.dr.ir. P.P.J. van den Bosch ir. N.G.M. Kouwenberg ir. J. Smit (Adimec BV)
Summary Within the EntoTrack project, equipment has been developed for automatically measuring and recording the positions and movements of insects, which is important in entomological research for biological crop protection. The behaviour of insects under the influence of attractants is studied. The measurement data concern position, shape, colour and other object properties; the processing involves identifying, classifying, counting and tracking objects. For this equipment a design is proposed in which components such as a camera, a frame grabber, an infrared flash and an optical filter are described, which together form a so-called demonstrator. It should be suitable for: counting and distinguishing objects in the laboratory and later in the production of biological pest control agents; measuring the effectiveness of traps by means of laboratory and field experiments; tracking parasitic wasps in short- and long-distance flight experiments; software development for specific image processing; and the application of fast high-resolution cameras. The selection criteria for the components are: general usability for the purposes of all participating research partners; covering a large field of view while retaining spatial resolution, so that small objects can be tracked; fast image acquisition, display and processing, needed to determine the path travelled by fast objects; the ability to record both day and night, so that objects are also visible at night; and the possibility to classify objects by their different spectral properties. Practical tests have shown that the optical filter and the camera meet the requirements: with a well-defined filter it is possible to discriminate on the basis of colour properties.
The camera is fast enough to track the fastest objects, and its resolution is sufficient to observe the smallest objects. The frame grabber, however, turns out to lack sufficient computing power; in the continuation of this graduation project, the possibilities of a PCI-based frame grabber will be investigated. The infrared flash also fails to meet the requirements, as it delivers too little light power. Since enough power is available during the day with sufficient sunlight, no further research into the infrared flash is carried out. The work was performed at ADIMEC Advanced Image Systems B.V. in Eindhoven.
62
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
J.H.M. Bijen 29 August 1996 Chaotic Disturbance Reduction Using Neural Networks prof.dr.ir. P.P.J. van den Bosch dr.ir. A.A.H. Damen ir. P. Houtkamp
Summary This report describes investigations in the area of disturbance reduction. In particular, the disturbance is considered chaotic. This type of process dynamics has become an increasingly active research topic in several scientific fields. Residuals formerly considered stochastic turn out to be essentially deterministic. Capturing the chaotic dynamics enables the prediction of future values, which can be used to design a proper control law to reduce the disturbance in the process output measurements. The chaotic dynamics are determined with the aid of a neural network. Single layer feedforward neural networks have proven to be powerful nonlinear identifiers, but they require extensive computational effort for optimization of the cost function. Several simulations are carried out using SIMULINK and MATLAB to show that disturbance reduction is possible. The work has been carried out at the Measurement and Control Group of the Department of Electrical Engineering at the Eindhoven University of Technology in the period October '95 - August '96.
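The idea of capturing chaotic dynamics with a small feedforward network can be illustrated as follows. This is a hypothetical sketch, not the thesis' MATLAB/SIMULINK code: a one-hidden-layer network is trained by batch gradient descent to predict the next sample of the logistic map; network size and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate a chaotic time series: x[k+1] = 4 x[k] (1 - x[k])
x = np.empty(500)
x[0] = 0.3
for k in range(499):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])
X, y = x[:-1].reshape(-1, 1), x[1:]          # inputs and one-step targets

# One hidden layer of tanh units, linear output
H = 10
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H);      b2 = 0.0
eta = 0.05

losses = []
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)
    yhat = h @ W2 + b2
    err = yhat - y                            # one-step prediction error
    losses.append(np.mean(err ** 2))
    # Backpropagation of the mean-squared-error cost
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y);  gb1 = dh.mean(axis=0)
    W2 -= eta * gW2; b2 -= eta * gb2
    W1 -= eta * gW1; b1 -= eta * gb1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

A predictor of this kind can then be embedded in a feedback law that subtracts the predicted disturbance from the process output.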
63
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
J.G.A. van Heeswijk 12 December 1996 Filtering methods for colour copying prof.dr.ir. P.P.J. van den Bosch ing. W.H.A. Hendrix ir. R.M. van Strijp (Oce Nederland)
Summary In order to reproduce colour documents, on for instance digital copiers or printers, the original must undergo many transformations before a high-quality reproduction can be obtained. An important transformation is the filtering, which suppresses noise and (perceptually) sharpens the image. This report examines several of these so-called "filtering methods". First, an overview is given of the requirements imposed on the filtering and of its place within the copying/printing process. Next, filtering architectures, filtering structures and some filtering algorithms are discussed. To improve copy quality, several filtering methods were investigated and compared with the reference filtering (as used at the start of the graduation period). The reference filtering sharpens the entire image. Experiments have shown that sharpening the entire image leads to noisiness in flat areas. The new filtering smooths flat areas and sharpens edges; this yields a (perceptually) sharper image. To suppress switching artifacts between edges and flat areas, soft filtering is used (a soft transition between edges and flat areas). This soft filtering places high demands on the segmentation (edge/flat information): classifying too little as edge usually means loss of detail, while classifying too much as edge introduces noisiness.
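The soft edge/flat switching described above can be sketched as a per-pixel blend of a smoothed and a sharpened image, weighted by an edge measure. This is an illustrative sketch, not Oce's actual pipeline; the box blur, unsharp masking and tanh weighting are assumptions chosen for simplicity.

```python
import numpy as np

def soft_filter(img, k=4.0):
    img = img.astype(float)
    # 3x3 box blur via padded neighbourhood averaging
    p = np.pad(img, 1, mode="edge")
    smooth = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    sharp = img + (img - smooth)              # unsharp masking
    # Edge measure: gradient magnitude, squashed into [0, 1)
    gy, gx = np.gradient(img)
    w = np.tanh(k * np.hypot(gx, gy))
    return w * sharp + (1.0 - w) * smooth     # soft edge/flat switch

# Demo: a vertical step edge in a flat image
img = np.zeros((8, 8)); img[:, 4:] = 1.0
filtered = soft_filter(img)
print(filtered.round(2))
```

Flat regions (zero gradient) pass through the smoothing branch unchanged, while the step edge is amplified; the tanh gives the gradual transition between the two regimes.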
64
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
W.A. Hendriks 29 August 1996 Robot motion planning with APF prof.dr.ir. P.P.J. van den Bosch ir. P. Dunias
Summary The path to be followed by a robot can be determined in several ways; the robot should of course avoid any obstacles in its environment. One approach is to use artificial potential fields. These artificial fields mark the obstacles in the robot's workspace with potentials; by letting the robot follow the potential field, it can find a given goal. This report treats the theory of path planning with artificial potential fields, including the method used to compute these potential fields. It also explains the theory of multi-dimensional joint spaces and the related aspects of the method used for solving a boundary value problem. Several designed algorithms are explained and proven. Based on this theory and these algorithms, a program was written to compute potential fields in joint space. The program is written flexibly, meaning it can compute potential fields for robots with different numbers of degrees of freedom.
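The basic mechanism can be illustrated for the simplest case. This is a minimal sketch, not the thesis implementation (which works in joint space): a point robot in the plane descends the gradient of an attractive goal potential plus a standard repulsive obstacle potential; all gains, radii and coordinates are illustrative assumptions.

```python
import numpy as np

GOAL = np.array([5.0, 5.0])
OBST = np.array([2.5, 2.0])    # obstacle centre, near the straight path
K_ATT, K_REP, RHO0 = 1.0, 0.5, 1.0

def grad_potential(q):
    g = K_ATT * (q - GOAL)                    # attractive term
    d = np.linalg.norm(q - OBST)
    if d < RHO0:                              # repulsive term, active only nearby
        g += K_REP * (1/RHO0 - 1/d) / d**2 * (q - OBST) / d
    return g

def plan(q0, step=0.01, tol=0.05, max_iter=20000):
    q, path = np.array(q0, float), [np.array(q0, float)]
    for _ in range(max_iter):
        if np.linalg.norm(q - GOAL) < tol:
            break
        q = q - step * grad_potential(q)      # follow the field downhill
        path.append(q.copy())
    return np.array(path)

path = plan([0.0, 0.0])
print(f"{len(path)} steps, final point {path[-1].round(2)}")
```

The robot curves around the obstacle and settles at the goal; the well-known weakness of the method, local minima of the combined potential, is avoided here by placing the obstacle off the direct line.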
65
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
P.E. Kamphuis 17 October 1996 The dSPACE system: A development system for fast controller implementation prof.dr.ir. P.P.J. van den Bosch ir. R.J.A. Gorter ing. W.H.A. Hendrix
Summary The Measurement and Control Group at the Eindhoven University of Technology acquired a development system for control applications from dSPACE GmbH. This system is especially designed for rapid implementation of control structures for testing and tuning. Its capabilities are extended by a large set of development tools and function libraries. The dSPACE system was bought with the objective of providing a powerful system for developing high speed controllers demanding high accuracy. With the system it is possible to automatically generate code for a Simulink model of the controller, and the needed I/O capabilities of the dSPACE system can be specified in the Simulink model. The generated code is used as the controller application running on the digital signal processor, which is the centre of the hardware configuration. With the development tools it is possible to alter parameters of the controller even while it is running on the DSP, giving the control engineer the possibility to adjust the control parameters while the desired process is being controlled. It is also possible to measure and visualize the behaviour of variables in time. This data acquisition is performed in real time at the sample frequency of the running application. The data capturing can be performed as one single capture or free running. It is also possible to trigger the data acquisition on the level of one of the signals; the capture interval can be specified around the trigger moment, making pre- and post-triggering possible. This report describes the configuration of the dSPACE system along with the available development tools. The automatic code generation for designed controllers is also described; this covers the structure of the generated code and the methods that can be used to extend the functionality of the automatically generated controller implementation.
The whole development process is shown using the design of a master-slave synchronization for two motors. The objective of this control task is to synchronize two load axes, each driven by a motor. Both axes must revolve with the same position displacement, which implies that both axes must have the same angular velocity. For this control task three approaches are taken: two with synchronous controllers and one with an asynchronous controller. The first synchronous controller is used as a reference for the performance of the two other methods; it uses the position encoders of the motors at full resolution. The second synchronous controller uses a limited resolution of the position encoder of the slave motor. The third controller is asynchronous: it also uses a limited resolution of the slave position encoder, but the controller is triggered by the pulses of the slave encoder, resulting in a control action that is asynchronous in time. The results of these three solutions for the master-slave synchronization are also presented in this report.
66
Candidate: Graduation date: Graduation project: Supervising professor: Supervisor:
J.H.H. Kop 29 August 1996 A Magnetic Levitation System prof.dr.ir. P.P.J. van den Bosch dr.ir. A.A.H. Damen
Summary This report describes the development of a magnetic levitation system. This project was meant as a first step towards realizing a magnetic bearing. In a magnetic levitation system a ball is levitated under an electromagnet. To keep the ball levitated we must be able to measure its position; therefore an inductive sensor has been realized. The inductive sensor makes it possible to sense and actuate on the same coil. It appeared to be reliable and accurate as long as the actuating currents were limited in both frequency and amplitude. Moreover, a current amplifier was developed. As a coil is an inductive load, the current amplifier had to be limited in frequency; the final results of the current amplifier were satisfying. Finally, a controller was developed to stabilize the levitation system. As a magnetic levitation system is a nonlinear system with parameter uncertainties, this is not easy. In practice we realized a PD controller and a PID controller that were able to levitate the ball. However, both controllers were very sensitive to external disturbances; therefore it deserves attention to develop a more robust controller and to use techniques like exact linearization. Finally we may state that a magnetic levitation system was realized that is able to keep a ball levitated. However, the system has many limitations, and many improvements are needed before a magnetic bearing can be realized.
67
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
J.H.M. Lemmers 29 August 1996 Evaluation of real-time operating systems, development methodologies and CASE-tools prof.dr.ir. P.P.J. van den Bosch ir. C.A.M. van de Berkel ir. J.W.J.J. Beckers
Summary The key word of this master's thesis is time. The first part of the thesis is an exploration of real-time operating systems and real-time software engineering. The second part covers the real-time behaviour of Windows NT 3.51 and the development of a measurement and control application. A real-time system is a system which has to respond predictably and within time constraints to external events. An embedded system is a computer system which is part of a bigger system and plays a role essential to the functioning of that bigger system. A real-time operating system is an operating system designed to enable user applications to meet their time constraints. Real-time operating systems are often highly scalable, optimized for speed, and predictable. This thesis discusses the real-time operating systems QNX, VxWorks, VRTX and iRMX. A result of this survey is that the differences between the operating systems lie more in the field of the development tools than in the field of architecture and performance. Real-time software engineering introduces extra aspects compared to 'ordinary' software engineering: time constraints, complex behaviour, verification, validation and simulation. A closer look has been taken at structured development methodologies, either function oriented (SA/SD by Ward/Mellor and SDL) or object oriented (Booch, OMT by Rumbaugh, Coad/Yourdon, Shlaer/Mellor). It is almost impossible to exploit all the advantages of a structured methodology if no CASE-tool is used. A CASE-tool is a tool to record, verify and validate the produced system specifications and system architecture. Furthermore, a CASE-tool can support developers by generating documentation and automatically generating code. The thesis discusses several CASE-tools, both general-purpose tools and tools specialised in real-time software development.
The last part of this master's thesis discusses the determination of the real-time performance of Windows NT 3.51, in order to establish whether Windows NT is suitable for measurement and control tasks. The response latency, the interrupt latency, the duration of the Interrupt Service Routine (ISR) and other relevant time periods are measured. The control action takes place in the ISR. The result of these measurements is that Windows NT is found suitable for measurement and control actions if the measured specifications are adequate for the algorithm to be developed. The measured response latencies have a minimum value of 10 µs and a maximum value of 56 µs. The variation in the response latency is the result of disk activities, so the maximum response latency can be decreased strongly by avoiding disk activities. With the measured specifications, the developer of a control algorithm can establish whether Windows NT is capable of running that algorithm. The algorithm can be inserted in the Measurement and Control application (MaCon application), a framework for running and testing control algorithms on Windows NT.
68
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
M.G.F. van Minderhout 25 April 1996 Recursive Identification Algorithms for parameter estimation of induction machines prof.dr.ir. P.P.J. van den Bosch ir. R.J. Gorter
Summary Field-oriented control is a technique used for high performance control of induction machines. It requires the actual value of the electro-mechanical torque, the velocity and the machine parameters as essential information. The indirect field-oriented control method employs a mathematical model of the induction machine to calculate the required rotor flux from easily measurable signals. The method requires accurate knowledge of the model parameters, which are machine dependent. As the machine parameters vary with time, they should be estimated on-line. The input-output model of induction machines is non-linear. Parameter estimation based on non-linear models is possible, but it involves a large computation time and is in general not feasible for on-line applications. However, by transforming the measured data the use of prediction error identification methods is enabled. On-line implementation of prediction error methods leads to recursive identification algorithms. Traditional algorithms suffer from several problems which have to be overcome. First of all, the identification algorithm must be able to track parameter changes. Traditionally some kind of forgetting is introduced: by using a forgetting factor λ in the least squares criterion, old information is slowly forgotten as it becomes less representative of the current status of the system. This introduces a second problem, known as covariance wind-up or blow-up: when the input signal is not sufficiently exciting, the traditional forgetting algorithms become extremely noise sensitive. The third major problem with recursive identification is the influence of large incidental disturbances, known as outliers or spikes; hence the algorithm must be made robust against outliers as well. In this report a recursive identification algorithm is developed which deals with these problems. It is based on the Selective Forgetting method.
In combination with a robust identification algorithm it deals with all three problems mentioned above. It is suitable for linear and non-linear regressions. Simulation results are presented which show that the algorithm has the required properties. The algorithm has been implemented on a DSP-PC system and has been tested on two first order RC networks. The presented measurement results show that the algorithm also works well in practice. Finally, the algorithm has been used for estimation of machine parameters.
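The baseline that Selective Forgetting improves upon can be sketched as standard recursive least squares with an exponential forgetting factor λ. This is an illustrative sketch, not the thesis algorithm; the regression problem and all numbers are assumptions.

```python
import numpy as np

def rls(Phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam.

    With lam < 1, old data is discounted so that slowly varying
    parameters can be tracked (at the cost of covariance wind-up
    when the regressors stop being exciting).
    """
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                    # initial covariance
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + k * (yk - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam # covariance update
    return theta

rng = np.random.default_rng(1)
Phi = rng.normal(size=(400, 2))              # persistently exciting regressors
theta_true = np.array([1.5, -0.7])
y = Phi @ theta_true                          # noise-free for clarity
theta_hat = rls(Phi, y)
print(theta_hat.round(3))
```

With exciting, noise-free data the estimate converges essentially exactly; the wind-up and outlier problems described above appear only when excitation is poor or the data contain spikes.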
69
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
P.G.J. v.d. Mortel 17 October 1996 Balancing a stick on a simple cart. prof.dr.ir. P.P.J. van den Bosch dr.ir. A.J.W. v.d. Boom
Summary You have probably tried it yourself: balancing a stick on your hand or finger. You probably used a long stick, because that is much easier than a short one; the stick you tried was probably about 1 metre long. To balance a 10 cm stick you would need a great deal of practice. To balance a stick on a simple cart with the help of a computer, hardware is needed to read in data, have the computer process it, and write new data out. We need data on the angle of the stick and the position of the cart; the motor voltage can then be computed and written out. First, some requirements were established. At a minimum, the data must be transferred with 8-bit resolution: a lower value is insufficient for balancing and a higher value can become too expensive. Initially the stick will be balanced within plus or minus three degrees; at a later stage, improvements can be investigated. The angle sensor is realized with an optical switch, an infrared transmitter and receiver in one, through which a small flag passes. This flag determines the amount of light the receiver receives. By attaching the stick to the flag, a reasonably linear relation can be created, which can be further optimized by implementing a look-up table in the computer. The position sensor is realized with two reflective object sensors: black stripes on a white background are placed on a wheel, so that a phase difference arises between the two sensors; the position information can thus be stored in an 8-bit value. The interface is realized with an I²C-bus interface. With the MACS software, a data transfer of 100 samples per second can take place; MACS computes the new drive voltage for the motor.
The motor is driven by a Siemens IC specially intended for this purpose. The construction of the cart also needs careful attention. A cart from Bart Smit turns out to be too small and too simple to balance a stick. A cart made of Meccano has many improvements over the Bart Smit cart, but it can probably be done better (and cheaper) by building a cart from scratch. The designed controller controls the stick well. Unfortunately the cart does not stand still, due to drift of the zero setting and quantization noise. It will be necessary to obtain data on the motor transfer function and to design a better controller. Building a new cart is also recommended. Furthermore, the voltage across the motor will have to be increased to obtain a faster reaction of the cart.
70
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
H.I. Nerminer 12 December 1996 Digital controller design for the gradient amplifier in an MRI scanner prof.dr.ir. P.P.J. van den Bosch ir. R.J. Gorter ir. W. van Groningen (Philips Medical Systems)
Summary Within the framework of the Master's project at the Eindhoven University of Technology, Faculty of Electrical Engineering, in association with Philips Medical Systems, a feasibility study on digital controller design for the gradient amplifier in an MRI scanner is carried out. Based on MR image quality, a criterion for the digital controller is derived. The present analog gradient amplifier is analysed and modelled; on the basis of this model, its performance is described. Based on the present gradient amplifier controller, different digital controller structures are designed. None of these structures meets the criterion, but neither does the present analog gradient amplifier. Much attention is paid to A/D and D/A converter placement and resolution, since these converters are the bottlenecks for digital controller design due to the high resolution requirements. Several simulations are carried out for the digital controller design, confirming the required converter solution. Digital controller design for the gradient amplifier is possible with common A/D and D/A converters of up to sixteen bits, but reproducibility requirements for the converter outputs could lead to less common A/D and D/A converters.
71
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
R.A.A. van Nieuwburg 12 December 1996 Line correspondences in an image sequence recorded by a moving camera prof.dr.ir. P.P.J. van den Bosch ing. W.H.A. Hendrix M.Sc. Liu Hong
Summary This report deals with the line correspondence problem. From an image sequence recorded by a moving camera we want to group the corresponding 2D line segments. Two 2D line segments from different images correspond if both are a projection of the same 3D line segment in the recorded scene. The sets of corresponding 2D line segments are required for the 3D reconstruction of the line segments. After the formulation of the problem, five correspondence algorithms are evaluated and compared, and one of them is chosen. This correspondence algorithm is then examined in detail, implemented and improved. The algorithm is tested with a limited set of image sequences. It works correctly for image sequences with a smooth camera path, small interframe changes in the 2D line segments and no missing line segments. For image sequences with missing line segments in some images, the results are only partially correct. Further research to treat the missing lines is necessary. In addition, comparison with experiments on other correspondence algorithms can give more insight into the weaknesses of the various correspondence algorithms.
72
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
M.M.H.J. Reinierkens 12 December 1996 Identification and control of a floating platform prof.dr.ir. P.P.J. van den Bosch dr.ir. A.A.H. Damen
Summary The floating platform is a laboratory process used for demonstration and research. It consists of a body that rests on three floats, with a crane mounted on top of the body. The platform floats in a tub filled with water. Three servos can be used to control the distance between the platform and the floats. The objective of the project is to identify a black box model of the platform. This model is then used to design a controller. The control objective is to keep the platform horizontal irrespective of disturbances caused by the crane or by waves in the tub. Before the identification starts, some information is gathered on the physical processes acting on the platform, and some improvements are made. A fence is placed near the edge of the tub, which greatly reduces wave reflection from the edge. Sensors are placed around the floats and used to measure the depth of the floats in the water. Preliminary research is done which can be used in the design of the test signal for identification. A noise signal is used to record a test set of signals for the identification program. The model is estimated using the IPCOS CACSD [IPC94] shell. The result of the identification is reduced in order using the Matlab MPI toolbox. Validation of the reduced model shows that it is good enough for control system design. The controller is designed using LQR theory and implemented using a Real-Time toolbox in Matlab. Tests show that the final controller can level the platform ten seconds after a weight of six kilograms is placed on it.
73
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
C.M.J. v.d. Sande 17 October 1996 Inverted pendulum on a robot in two and three dimensions. prof.dr.ir. P.P.J. van den Bosch ir. C.A.M. van de Brekel
Summary We use an ASEA IRb-6 manipulator to balance an inverted pendulum, in both XY and XYZ space. The third dimension is used to reduce the effective acceleration of gravity to 2.3 m/s². The angular acceleration of the inverted pendulum with respect to the vertical axis is thereby decreased in comparison with the two dimensional system. We use a vision system to measure the position of the inverted pendulum in the XY plane; the z coordinate is calculated from the angles of the links of the manipulator. We use the principles of the dead-beat controller to balance the inverted pendulum.
74
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
M. van Santen 25 April 1996 Modelling dynamic behaviour of a yeast cell: Simulation model of the TCA cycle and glyoxylate bypass prof.dr.ir. P.P.J. van den Bosch dr.ir. A.J.W. van den Boom dr. H. Kuriyama dr.ir. M.L.B. Keulers
Summary A simulation model of the tricarboxylic acid (TCA) cycle and glyoxylate bypass has been developed. The simulation model is capable of calculating the amount of a metabolite that disappears per unit of time and the concentration of this metabolite. The model could serve as a tool in finding the cause of an autonomous, sustained, metabolic oscillation observed in a continuous culture of the yeast Saccharomyces cerevisiae grown on ethanol. The simulation model consists of three different blocks: a concentration block and a flux block for every metabolite, and a reaction block for every reaction of the TCA cycle and glyoxylate bypass. In a concentration block, the change in concentration of a metabolite is calculated from the amount of this metabolite produced per unit of time (referred to as flux_in) and the amount that disappears per unit of time (referred to as flux_out). The flux block calculates flux_out as the rate constant of the metabolite times its concentration. A reaction block contains the stoichiometry of a reaction. The amount of flux entering the glyoxylate bypass is determined by a so-called flux distribution function in the corresponding reaction block; this function incorporates the negative feedback of two metabolites. With these building blocks it is possible to construct a simulation model of a cyclic pathway with a bypass, which behaves in a predictable way. It has been tested whether this simulation model can be fitted to experimental data. The available experimental data were an oscillating ethanol flux going into the cell, an oscillating acetate concentration and an oscillating CO₂ evolution rate. A number of simulations have been done to test whether the simulation model could reproduce the amplitude and the phase shift of the experimental data. The amplitude of the simulated acetate concentration could not be reproduced.
This could indicate that this phase shift is a result of the ethanol flux going into the cell. Adjusting the amplitude of the CO₂ evolution rate is more difficult, since it depends not only on several fluxes, but also on the flux distribution functions of the glyoxylate bypass and the ethanol input. To produce a correct simulation of the CO₂ evolution rate, more experimental data is needed.
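The building blocks described above can be sketched for a single metabolite pool. This is a minimal illustration, not the thesis model; the function name, flux value and rate constant are assumptions. The concentration block integrates dC/dt = flux_in - flux_out, with the flux block computing flux_out = k · C.

```python
import numpy as np

def simulate_pool(flux_in, k, c0=0.0, dt=0.01, t_end=50.0):
    """Euler simulation of one metabolite pool: dC/dt = flux_in - k*C."""
    steps = int(t_end / dt)
    c = np.empty(steps + 1)
    c[0] = c0
    for i in range(steps):
        flux_out = k * c[i]                          # flux block
        c[i + 1] = c[i] + dt * (flux_in - flux_out)  # concentration block
    return c

c = simulate_pool(flux_in=2.0, k=0.5)
print(f"steady state ~ {c[-1]:.3f} (analytic flux_in/k = {2.0/0.5:.1f})")
```

Chaining such pools through reaction blocks (stoichiometry and flux distribution functions) yields the cyclic pathway with bypass; an oscillating flux_in then propagates oscillations through the concentrations.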
75
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
M.G.J. Tenthof 29 August 1996 Modified error diffusion on Oce colour copying and printing prof.dr.ir. P.P.J. van den Bosch ir. N.G.M. Kouwenberg
Summary In order to reproduce high quality full colour documents on for instance digital copiers or printers, the original document is subject to many transformations and conversions. An important transformation is the encoding of the original continuous tone information to binary information, needed for reproduction on most marking devices. In this report several of these so called 'halftone techniques' are examined.
First of all an overview of the most commonly used halftone techniques is given. It is concluded that in order to obtain optimum image quality a halftone algorithm should produce printable, i.e. well reproducible, irregular structures. Since error diffusion produces very fine, high frequency patterns, it is taken as a starting point for further investigations. It is shown that application of pure error diffusion on the (colour) print engine does not result in the optimum image quality this technique promises. Although edges and detail information are rendered very well, applying pure error diffusion results in a visually disturbing noisy impression when smooth or slowly varying image information needs to be rendered. In order to improve image quality, several modifications to the error diffusion algorithm are examined. By modulating the error diffusion threshold with an output dependent feedback term, the output patterns are slightly coarsened in order to obtain better reproducible output images. Although the edge rendering properties of error diffusion were maintained, reliable and artifact-free colour mixing proved to be impossible, and application of this technique introduces new visually disturbing artifacts which cannot be reliably controlled because of the instability of the algorithm. When the error diffusion threshold is modulated with a dither matrix, however, sharp and well reproducible output images as well as reliable colour mixing are obtained. By adjusting the amount of threshold modulation with an edge operator it is possible to render edge and detail information sharply while rendering smooth or slowly varying information stably.
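The dither-matrix threshold modulation can be sketched on top of classic Floyd-Steinberg error diffusion. This is an illustrative sketch, not Oce's implementation: the 2x2 Bayer matrix, the modulation depth and the grayscale test patch are all assumptions.

```python
import numpy as np

DITHER = np.array([[0.0, 0.5], [0.75, 0.25]])   # small Bayer-style matrix

def error_diffusion(img, mod=0.2):
    """Floyd-Steinberg error diffusion with a dither-modulated threshold.

    img is grayscale in [0, 1]; mod sets how strongly the tiled dither
    matrix shifts the quantization threshold, coarsening the pattern.
    """
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # Threshold modulated by the tiled dither matrix
            t = 0.5 + mod * (DITHER[y % 2, x % 2] - 0.5)
            out[y, x] = 1.0 if img[y, x] >= t else 0.0
            err = img[y, x] - out[y, x]
            # Distribute the quantization error (Floyd-Steinberg weights)
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

halftone = error_diffusion(np.full((32, 32), 0.3))
print(f"mean gray preserved: {halftone.mean():.3f}")
```

Because the distributed error compensates every quantization decision, the mean gray level of the binary output stays close to the input regardless of how the threshold is modulated.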
76
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
M.L. Thijssen 29 August 1996 A 2-dimensional magnetic levitation system prof.dr.ir. P.P.J. van den Bosch ir. P. Houtkamp
Summary As a preliminary study for a two-axis mirror system with magnetic bearings, a two dimensional magnetic levitation system was designed. This system consists of four electromagnets and a steel ball. The magnets can attract the ball in four directions within a two dimensional area. The reluctance force of the magnets can be controlled by the current through the coil. To levitate the ball between the magnets, an accurate position sensor and current amplifier are necessary. The position sensor is based on measuring the inductance of the coil, which depends on the ball position. The advantage of this method is that the coil can be used as both sensor and actuator. The realized sensor signal is disturbed by saturation effects of the core, the mutual inductance between the magnets and noise of the current amplifier. The realized current amplifier is very accurate, but the large inductive load (the coil) causes some noise which disturbs the sensor. Simulations showed that this system can be controlled by two control schemes: a feedforward controller or a controller based on exact linearization. These controllers were not tested in practice because of the sensor problems. This work has been carried out at the Measurement and Control group of the Department of Electrical Engineering at the Eindhoven University of Technology in the period December '95 - August '96.
77
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
E.P. Wittrock 17 October 1996 Sensor Fusion prof.dr.ir. P.P.J. van den Bosch dr. S. Weiland ir. R. Kunst
Summary Sensor fusion is used to combine several sensor measurements with different properties into a result that cannot be achieved with a single sensor. The objective of this report is to find, and compare, methods for fusing different sensor signals. Within this group there are applications in two projects: the first concerns determining the deck attitude of a ship, and in the second it is important to determine the position of a bus on a bus lane very accurately. This report describes discrete-time Kalman filter algorithms used to fuse the sensor measurements. Several methods are proposed and defined. The covariance-based algorithms are used to fuse two or more different sensor measurements into a joint result. The various fusion algorithms are implemented in Matlab files, and their performance is evaluated by computer simulations with a one-dimensional dynamic model in which an acceleration acts on the velocity and position. The simulations show that the algorithms give the same result for various sensor configurations and sensor variances. The effect of bias on the various fusion algorithms is also examined. A proposal is made to solve the sensor fusion problem with an H∞ filter, in a manner equivalent to the Kalman filter, so that the sensor properties in the frequency domain can be taken into account in the weighting functions.
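The core of covariance-based fusion can be shown in a few lines. This is an illustrative sketch, not the thesis' Matlab implementation: two unbiased measurements of the same quantity are weighted by the inverse of their variances, and the numbers used are made up.

```python
def fuse(z1, v1, z2, v2):
    """Fuse two unbiased scalar measurements with variances v1, v2."""
    w1 = v2 / (v1 + v2)                      # more weight on the better sensor
    z = w1 * z1 + (1 - w1) * z2              # fused estimate
    v = v1 * v2 / (v1 + v2)                  # fused variance, <= min(v1, v2)
    return z, v

z, v = fuse(10.2, 4.0, 9.8, 1.0)
print(f"fused estimate {z:.2f}, variance {v:.2f}")
```

The fused variance is always smaller than that of either sensor alone, which is exactly the benefit a single sensor cannot provide; a Kalman filter applies the same weighting recursively through its gain.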
78
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
W.P.M. Houwers 15 February 1996 Local-feature-based recognition of partially-occluded objects using a neural network classifier prof.dr.ir. P. Eykhoff prof.dr.ir. P. Eykhoff
Summary Neural networks are widely used in the area of machine perception and, in particular, in the area of object recognition. This thesis describes a new method of recognizing partially occluded objects with the use of an artificial multilayer neural network (perceptron). In the low-level vision system, a boundary description of the object is given in line and arc segments. From this description, a set of local features is formed, and these local features are learned by the neural network. In the recognition stage, the neural network uses these local features to recognize different objects in the scene. Although the learning stage of the neural network is computationally time-consuming, the recognition is carried out on-line. In order to prevent the neural network from getting stuck at a local minimum of the error surface and to reduce training time in the learning stage of the network, an improved learning algorithm, which uses a momentum term, is proposed. This method is derived from the well-known error-back-propagation training algorithm. Furthermore, in order to find the optimal network topology, a dynamic-node-creation method is proposed. The neural network has been applied in practice and the results of this network are encouraging. The experimental results show that the local-feature-based system outperforms the existing 2-D system, provided that in the learning stage the training algorithm succeeds in converging.
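The momentum modification mentioned above can be sketched on a toy problem. This is an illustrative reconstruction, not the thesis code: gradient descent keeps a velocity term so that updates accumulate along consistent gradient directions, shown here on a one-dimensional quadratic error surface with invented gains.

```python
# Hedged sketch (not the thesis implementation): gradient descent with a
# momentum term, on the 1-D error surface E(w) = (w - 3)**2.

def grad(w):
    """dE/dw for E(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def train(w, lr=0.1, momentum=0.9, steps=300):
    v = 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity update with momentum term
        w = w + v
    return w

print(round(train(0.0), 3))  # converges to the minimum at w = 3.0
```

In full back-propagation the same update is applied per weight, with `grad` replaced by the back-propagated error derivative.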
79
CHAIR OF MEDICAL ELECTRICAL ENGINEERING
81
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
R.H.A.M. Meijers 15 February 1996 A closed loop arterial pressure controller and an Infusion Toolbox for anaesthesia; integration and clinical evaluation prof.dr.ir. J.E.W. Beneken dr.ir. J.A. Blom
Summary During surgery it is the task of the anaesthetist to bring the patient into an optimal condition, which includes tasks such as creating a state of unconsciousness and suppressing pain sensation and muscle reactions. To this end, various drugs and devices need to be controlled and monitored. Computers can be of great help in supporting the anaesthetist by retrieving information, giving a clear overview of the data at a central location, and automating certain tasks. The final goal would be a central terminal to control and monitor all functions and devices for anaesthesia. In this report a closed-loop blood pressure controller is described which is based on a simple and robust PI controller and a supervising expert system. Adaptive control is necessary because the patient's sensitivity to the drug used, i.e. sodium nitroprusside (SNP), varies over a wide range. SNP decreases the mean arterial pressure (MAP) through dilation of the smaller arteries. 33 clinical tests during cardiac surgery have been performed, and the evaluation, which was one of the objectives of this research, shows good and safe performance. The controller was on average in automatic mode for more than 90% of the duration of an operation, and during effective control the MAP was within 10 mmHg of the setpoint for 89% of the time. The average distance to the setpoint during effective control was 4.5 mmHg. During these tests, automatic delivery of various other drugs was performed using an Infusion Toolbox. This Infusion Toolbox, also described in the report, is a complete guidance system that assists the anaesthetist in delivering many drugs simultaneously through computer-controlled infusion. The second objective of this research was to integrate the blood pressure controller and the Toolbox. The resulting design has a modular structure and is based on a client-server model.
Communication between the two applications is accomplished through the serial network of the Toolbox, which is built around a universal device communication driver (UDCC) and a bedside communication controller (BCC) residing in the PC. This network provides a device communication controller (DCC) for the various commercially available infusion devices that can be controlled. The communication network and protocol were implemented, and a first prototype of the controller with a new user interface in the Infusion Toolbox application has been built. This prototype now has to be tested extensively through simulations before it can be used in a clinical environment. The resulting system will then be another step towards a machine for total intravenous anaesthesia (TIVA).
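The closed-loop idea described above can be sketched in simulation. This is an illustrative reconstruction, not the clinical controller: a discrete PI controller drives an invented first-order patient model in which the infusion rate lowers MAP, with all gains and model parameters assumed.

```python
# Hedged sketch (not the clinical system): a discrete PI controller regulating
# mean arterial pressure (MAP) via a vasodilator infusion, on an assumed
# first-order patient model. Gains, sensitivity and time constant are invented.

def simulate(setpoint=90.0, map0=120.0, kp=0.05, ki=0.02, steps=400, dt=1.0):
    pressure, integral = map0, 0.0
    sensitivity, tau = 1.0, 30.0  # assumed mmHg per infusion unit, and seconds
    for _ in range(steps):
        error = pressure - setpoint            # positive error: pressure too high
        integral += error * dt
        infusion = max(0.0, kp * error + ki * integral)  # infusion rate >= 0
        # first-order response of MAP to the vasodilator infusion
        pressure += dt / tau * (map0 - sensitivity * infusion - pressure)
    return pressure

print(round(simulate(), 1))  # settles near the 90 mmHg setpoint
```

The supervising expert system of the thesis would sit around a loop like this, adapting gains when the assumed sensitivity turns out to be wrong.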
82
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
J.W. Risseeuw 29 August 1996 A portable respiratory rate recorder for athletes prof.dr.ir. J.E.W. Beneken ir. W.H. Leliveld H.J.M. Ossevoort
Summary This report describes the development of a portable respiratory rate recorder (PARR) for athletes. Respiratory rate is an important parameter, because it can be used to detect the anaerobic threshold. The PARR can be used to measure the respiratory rate of athletes during exercise in the field, whereas existing devices can only be used in a laboratory. The measured respiratory rate is shown on a liquid crystal display and stored in memory. When the measurement is completed, the PARR can be connected to a personal computer to transfer the results for further analysis. The first step in the project was the formulation of the requirements for such a device; the user interface was also specified. During the project a new sensor for measuring the respiratory signal of an athlete was developed. It consists of a pressure sensor mounted in a flexible belt around the chest of the athlete. The hardware of the PARR consists of a Philips microcontroller with a built-in A/D converter and some supplementary circuits. A custom active filter/amplifier has been designed to convert the signal from the sensor into a signal suitable for the A/D converter. A previously developed frequency-analysis method is used to determine the respiratory rate from the measured signal. This function and the rest of the software are written in the 'C' language. In a final evaluation under laboratory conditions, the portable respiratory rate recorder was tested with four subjects. The values measured during the exercise test were compared with those measured with a professional measurement system. This comparison shows that the portable respiratory rate recorder gives reliable values for the respiratory rate of an athlete.
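One common form of frequency analysis for this task can be sketched as follows. This is an illustrative reconstruction, not the thesis' method: the dominant peak in the Fourier spectrum of the chest-belt signal is taken as the breathing frequency, with the sampling rate and test signal invented.

```python
# Hedged sketch (not the PARR firmware): estimate respiratory rate as the
# frequency of the largest non-DC DFT component of the belt-pressure signal.

import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC DFT component."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):          # skip k = 0 (the DC component)
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs = 10.0                                # Hz, assumed sampling rate
x = [math.sin(2 * math.pi * 0.5 * i / fs) for i in range(200)]  # 0.5 Hz "breathing"
rate = dominant_frequency(x, fs) * 60.0  # convert Hz to breaths per minute
print(rate)  # 30.0
```

A real implementation would use an FFT and windowing; the direct DFT above is kept for clarity.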
83
Candidate: Graduation date: Graduation project: Supervising lecturer: Supervision:
H.W.M. de Bruyn 12 December 1996 Using Ptolemy as a fast-prototyping environment for morphological filters dr.ir. P.J.M. Cluitmans dr.ir. J.A. Blom
Summary In this report, the possibilities of using Ptolemy as a fast-prototyping platform are studied. Ptolemy is a design framework with several models of computation, capable of simulation and code generation. As an example, morphological filters are implemented in a simulation domain and in a domain that produces assembly code for a digital signal processor (DSP). When easy switching of a design between domains is demanded, the modules with the same functionality in the different domains must be synchronized. This synchronization must be applied to the naming of the defining files, the number and names of the inputs and outputs, and the name and type of certain variables used in the modules. To create the same functionality in different domains, some services offered in one domain must be created by the user in another domain. For example, the simulation domain creates a history buffer for every input that is declared, while such a buffer must be built from scratch in the domain generating code for the DSP. When the conditions mentioned above are met, Ptolemy can be a powerful tool to design and test an application in one domain and quickly translate it to another.
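The morphological filters used as the example above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis' Ptolemy modules: flat erosion and dilation of a 1-D signal reduce to running minimum and maximum over a structuring-element window, and an "opening" (erosion followed by dilation) removes narrow positive spikes.

```python
# Hedged sketch (not the Ptolemy modules): flat 1-D morphological operators.

def erode(signal, width):
    """Flat erosion: running minimum over a window of the given width."""
    pad = width // 2
    ext = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    return [min(ext[i:i + width]) for i in range(len(signal))]

def dilate(signal, width):
    """Flat dilation: running maximum over a window of the given width."""
    pad = width // 2
    ext = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    return [max(ext[i:i + width]) for i in range(len(signal))]

def opening(signal, width):
    """Erosion followed by dilation: suppresses features narrower than width."""
    return dilate(erode(signal, width), width)

x = [0, 0, 0, 9, 0, 0, 5, 5, 5, 5, 0, 0]   # a narrow spike and a wide plateau
print(opening(x, 3))  # the spike is removed, the plateau survives intact
```

In the DSP domain each of these windows becomes exactly the per-input history buffer discussed above.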
84
Candidate: Graduation date: Graduation project: Supervising lecturer: Supervision:
P.A. de Clercq 17 October 1996 Implementing a critiquing system to provide decision support in the ICU: the CritiCIS system dr.ir. P.J.M. Cluitmans dr.ir. J.A. Blom
Summary Due to the development of new technology for diagnostic and therapeutic purposes, combined with the introduction of microprocessor technology, the amount of data collected in Intensive Care Units (ICUs) has increased enormously. In order to collect, store and manage this flow of information, Patient Data Management Systems (PDMSs) were introduced. An example of such a system is the Intensive Care Information System (ICIS), a PDMS designed to process data at the ICU of the Catharina hospital in Eindhoven, the Netherlands. To improve the quality of the data stored in ICIS, a critiquing system was implemented to provide decision support to the users of ICIS. This system, called CritiCIS, accepts a medical protocol as well as data from the ICIS database and verifies 1) the consistency of the database itself and 2) the consistency of certain treatments (e.g., it is recommended not to administer penicillin to a patient who is allergic to penicillin). CritiCIS is implemented using the SIMPLEXYS toolbox, a set of tools for designing real-time expert systems, and Borland Delphi, a Rapid Application Development (RAD) tool. Although implemented as a separate application, CritiCIS acts as an integrated part of ICIS: the user selects a patient in ICIS, after which ICIS activates the critiquing system. The selected patient's data are then gathered and processed by CritiCIS, and possible inconsistencies are shown as warnings to the user. The system is currently in the development phase and is now being tested by the medical staff. The first results are promising: thanks to the integrated user interface of CritiCIS and the critiquing approach employed, the users of ICIS are satisfied with the system. Therefore, the conclusion is drawn that it is possible to successfully implement a critiquing system that provides decision support to the medical staff of the above-mentioned ICU.
This paper describes the development of the CritiCIS system. It provides information about the problem domain, ICIS and the SIMPLEXYS toolbox as well as the internal functioning of CritiCIS, its means of communication with (the users of) ICIS and the knowledge elicitation process.
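The critiquing idea can be illustrated with a small sketch. This is not CritiCIS or SIMPLEXYS: the rules, field names and thresholds below are invented, and serve only to show how a critiquing system checks a record and returns warnings rather than blocking the user.

```python
# Hedged sketch (not CritiCIS): rule-based consistency checks on a patient
# record. All rule contents and record fields are illustrative assumptions.

def critique(patient):
    """Return a list of warning strings for inconsistencies in the record."""
    warnings = []
    allergies = set(patient.get("allergies", []))
    for drug in patient.get("prescriptions", []):
        if drug in allergies:
            warnings.append(f"patient is allergic to prescribed drug: {drug}")
    hr = patient.get("heart_rate")
    if hr is not None and not 20 <= hr <= 250:
        warnings.append(f"implausible heart rate in database: {hr}")
    return warnings

record = {"allergies": ["penicillin"],
          "prescriptions": ["penicillin", "paracetamol"],
          "heart_rate": 300}
for w in critique(record):
    print(w)  # two warnings: an allergy conflict and an implausible value
```

The first rule corresponds to the treatment-consistency check named in the summary, the second to the database-consistency check.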
85
Candidate: Graduation date: Graduation project: Supervising lecturer: Supervision:
H. Kuster 12 December 1996 Talking kitchen scale for the visually impaired dr.ir. P.J.M. Cluitmans ir. W.H. Leliveld H.J.M. Ossevoort
Summary This report presents the results of a graduation project carried out over the past eight months at the Eindhoven University of Technology, Department of Electrical Engineering, Measurement and Control Systems group, Medical Electrical Engineering section. The subject of the project is the design of a talking kitchen scale for the visually impaired. As a first step, a small market survey was carried out to find out which digital kitchen scale could best be used as a basis for the project. After the choice fell on the Ohaus LS5000, the hardware for the speech interface was designed. This interface is based on an 80CL580 microcontroller working together with an ISD2560 speech synthesizer. As a final step in the design process, the microcontroller and the speech synthesizer were programmed so that the complete speech interface can perform the following functions. The weight can be spoken in two different ways: when the weight is stable it is spoken in full, but when the weight is changing it is extrapolated and spoken in shortened form. Volume control: the volume of the speech can be adjusted with a push button. Alarm function: the speech interface can be used as a simple alarm clock, also operated with a push button. Speech repeat function: the last spoken message can be repeated by pressing the repeat key. The prototype that was built has been tested on a limited scale, which showed that a few small improvements were still possible. Support for multiple languages also remains to be programmed. Once these points have been realized, the speech interface can be produced on a limited scale, so that visually impaired people can actually make use of the talking scale.
86
Candidate: Graduation date: Graduation project: Supervising lecturer: Supervision:
N.P. de Regt 17 October 1996 Talking room thermostat, a research project for the visually impaired and the elderly dr.ir. P.J.M. Cluitmans ir. W.H. Leliveld H.J.M. Ossevoort
Summary This graduation report deals with the realization of a talking room thermostat for the visually impaired and for elderly people whose eyesight has deteriorated. The talking thermostat is based on a commercially available room thermostat that is currently in production. An interface converts the display information of this thermostat into speech. For the realization of the project, various types of room thermostats were first investigated. An electronic room thermostat was chosen because of the possibility of incorporating digital electronics. Honeywell was chosen as the manufacturer, with the Chronotherm III room thermostat. This thermostat has a simple and clear control panel, which is particularly important for elderly people. The Honeywell Chronotherm III also offers extra comfort because it uses an optimization control. Next, various methods for obtaining the data from the display were investigated. Decoding the display lines was chosen, because other methods are not feasible and/or do not provide all the relevant information. In the speech interface, which decodes the display lines, components were selected with attention to supply voltage, current consumption and the availability of an SMD version. The final speech interface decodes the display lines, which are presented to a microcontroller via a decoding circuit and multiplexers. A speech IC is driven directly by the microcontroller. The speech interface consists of two printed circuit boards: the hardware for decoding the display lines is on a separate board, as is the hardware around the microcontroller. The speech interface uses two push buttons, the "Display information" button and the "Setting information" button.
The "Display information" button is pressed to hear which temperature, time, etc. are shown on the display. With the "Setting information" button, changes made while adjusting the settings are spoken. Adjustment takes place using the keyboard of the thermostat. A first evaluation showed that the Honeywell Chronotherm III thermostat, with small modifications to the keys, is suitable for the visually impaired. The speech unit with push buttons and loudspeaker is housed in a separate enclosure next to or below the Chronotherm III. Suggestions for possible improvements conclude this report.
87
Candidate: Graduation date: Graduation project: Supervising lecturer: Supervision:
M.J. Sinke 29 August 1996 Review and implementation of current brightness models dr.ir. P.J.M. Cluitmans dr.ir. J.B. Martens
Summary This report documents the search for a brightness model that is able to describe brightness phenomena in complex images. A number of brightness models are discussed. The first class is that of brightness integration models. The retinex model is discussed, and the problem of non-zero curl, which leads to inconsistencies within this class of models, is treated. Next, feature-detector models are discussed. These models incorporate a multi-scale structure. The models discussed are the local energy model and the MIRAGE and MIDAAS models. Then the class of object-oriented models is considered: the original object-oriented brightness model as well as a more recent modification of that model are discussed. The last model discussed is the neural network model. Of these models, the object-oriented models are implemented. Brightness integration models are shown to be too simple for complex stimuli, while the MIDAAS and MIRAGE algorithms incorporate one-dimensional interpretation stages that are not easily generalizable to two dimensions. The neural network model does not help in understanding the process that leads to a brightness impression. It is, however, two-dimensional, and is a candidate for future implementation. The two implemented models, and a number of concepts emerging from all the models considered, are used to compute the response to four complex stimuli. It is shown that the first model fails to accurately predict the brightness impressions obtained in experiments. The second model comes closer, but needs careful tuning before it may be able to predict the results with some accuracy. This is a problem that should be addressed in a future project. Finally, the conclusion is drawn that the failure of all but one of the models is not easily explained. It is suggested that this may be due to the fact that the quantity 'brightness' cannot easily be defined.
It is also observed that brightness may not even be an internal representation of the visual system, but one derived from other descriptions. This observation is motivated by the structure of some of the more current models. Another direction for further research might be to investigate this statement.
88
Candidate: Graduation date: Graduation project: Supervising lecturer: Supervision:
B. Souabi 17 October 1996 An extension of the EDF standard for event-related potential (ERP) research dr.ir. P.J.M. Cluitmans ir. M. van de Velde ir. M.M.C. van den Berg-Lenssen (KUB) dr. G.J.M. van Boxtel (KUB)
Summary The Psychonomics section of the Katholieke Universiteit Brabant has been cooperating for some time with the Medical Electrical Engineering section of the Eindhoven University of Technology. The Psychonomics section is going to carry out the processing of its experimental data on a new platform. This change of platforms, with different operating systems, brings problems with it: the software has to be redeveloped for the new operating system. This graduation project consists of developing the software on the new platform. Besides a new operating system, a different data format has also been chosen: the European Data Format (EDF). The new data format, however, is not suitable for storing the (event-related) data as used at the KUB. In cooperation with the TUE, an extension was therefore designed that makes it possible to store the data (with event information) in the new standard without fundamentally changing it. After the choice of a new operating system and a worked-out protocol for storing the data, the next step in the development project was the design of a conversion program that can convert the old experimental data, still stored in the old (Fysian) format, into the new 'extended' EDF. With this, the old data can also be processed on the new platform in the future. The final step of my graduation project was the development of a software library to be used by future programmers when developing processing software for EDF data. This library contains a number of functions that mainly simplify the input and output routines from/to EDF data files for third parties. A programmer does not have to study the 'real' structure of an EDF data file, but can read data from, or write data to, a data file with a simple function call.
When developing a library, simplicity and structure are very important, since others have to work with this software and must be able to understand it. The programmed software has been tested with various methods. The conversion program was tested by processing the converted data with third-party software and comparing the results. Third-party software was also used to inspect the converted data; since the new format must not deviate from the standard, this software should have no problem with it. Some routines from the library are already in use in software of others, both at the TUE and at the KUB. The foundation has been laid. The processing software that is already available on the old operating system must now be written for the new one, so that the Psychonomics section can switch to the new system. The library can still be extended with other functions that take 'low-level' operations on the data off the programmer's hands, mainly for processing the Event and the Info channel.
89
Candidate: Graduation date: Graduation project: Supervising lecturer: Supervision:
J.P. van de Ven 25 April 1996 Blood pressure control using H∞- and μ-synthesis dr.ir. P.J.M. Cluitmans dr.ir. J.A. Blom dr.ir. A.A.H. Damen
Summary Following the many studies that have already been carried out to design a robust blood pressure controller, the research described in this report deals with the synthesis of a controller using H∞- and μ-synthesis. As far as I know, no research on blood pressure controllers has yet been done in this area. Lowering the blood pressure is done by infusing the drug sodium nitroprusside, which increases the diameter of the arterioles and thereby lowers the blood pressure. The design of the controller starts from a simplified model of the blood pressure drop as a function of the infusion rate: a first-order process multiplied by a sensitivity factor and a delay. The time constant, the sensitivity factor and the delay are variable parameters; the sensitivity factor in particular can vary by a factor of 36. The research was roughly divided into three steps: H∞-synthesis for the process without delay, which did not yield a controller with robust performance; μ-synthesis for the process without delay, which did yield a controller with robust performance; and μ-synthesis for the process with delay, which yields a controller with robust performance when the requirements are relaxed. In the third step, a controller was thus designed that gives robust performance. For patients with a low sensitivity to the drug, however, this controller turns out to drive the blood pressure too slowly to its final value. It was then examined whether the desired result can be obtained with two controllers; this turns out to be the case, but the benefit of a μ-controller is then lost. An existing system has already been developed by dr.ir. J.A. Blom and is described in his thesis ([1]). This system is an expert system with five PID controllers, each for a particular sensitivity range.
Switching between these controllers takes place under the supervision of the expert system. When several μ-controllers have to be used, switching will have to be done in a similar way; in effect, the PID controllers are then replaced by μ-controllers. The μ-controllers, however, are of higher order (10th order), while there is no reason to replace the PID controllers other than reducing the number of controllers. The conclusion is therefore that μ-synthesis offers no improvement over an already existing system. It is possible that a slightly better μ-controller can be designed if the approximation of the delay described in appendix A is used. Unfortunately, this approximation cannot yet be used, because a particular block structure describing the perturbations is not yet available in the μ-synthesis toolbox of Matlab. If it becomes available in the future, that possibility should still be investigated.
90
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
A.L. Hoffmann 17 October 1996 EXPLORE: a method for the inductive learning of optimal diagnostic decision rules prof.dr.ir. A. Hasman dr.ir. J.A. Kors (EUR)
Summary Techniques are known from pattern recognition and artificial intelligence for automatically constructing a classifier from example objects. For application in medicine, it is important that the classifier be "comprehensible", that is, that physicians intuitively understand how it works and that it matches the way they are used to reasoning. Decision trees and decision rules satisfy this requirement, which narrows the broad field of techniques down to techniques for the inductive learning of decision trees or decision rules. Almost all induction techniques, however, suffer from two drawbacks. The first is that they almost always try to maximize classification accuracy, whereas diagnostic performance measures such as sensitivity and specificity are usually more relevant. The second is that the optimality of the resulting classifier is not guaranteed. The aim of this research is to develop a method for constructing a classifier that is optimal under constraints imposed on its diagnostic performance measures. To this end, a new technique has been developed that, by exhaustively generating all possible decision rules, induces the single best rule that maximizes one of the diagnostic performance measures under restrictions imposed on the other measures. Because of the enormous computational complexity of this approach, it often turns out to be necessary in practice to use heuristics. A number of simple heuristics have been proposed and implemented, and it has been indicated how these heuristics affect the complexity. The new algorithm, called EXPLORE (Exhaustive Procedure for Logic-Rule Extraction), has been applied to a number of data sets previously analysed in the literature.
Experimental results show that the heuristics strongly reduce the complexity, while the algorithm still succeeds in finding an optimal or near-optimal solution. Compared with other, more conventional induction techniques, comparable or better classifiers are constructed when maximization of classification accuracy is used as the optimization criterion. In the current implementation of EXPLORE it is possible, depending on the complexity of the classification task, to search exhaustively or to use one or more heuristics. The experimental results provide indications that more sophisticated heuristics, based on a beam-search strategy or a multi-step analysis, can speed up the induction process considerably. Although the practical usefulness of EXPLORE is currently limited to the induction of short rules, it is usually at an advantage when a good solution in the form of a short rule exists. It has been shown that, in contrast to most other induction techniques, the new method makes it possible to adapt the diagnostic performance of the resulting classifier to the specific constraints imposed by the user. In this way, a method has been created for constructing application-specific decision rules that are optimal.
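The core search described above can be sketched on a toy problem. This is an illustrative reconstruction, not the EXPLORE implementation: it enumerates every single-threshold rule "feature >= t" on an invented data set and keeps the rule with the highest sensitivity among those meeting a specificity constraint; real EXPLORE handles conjunctions of such terms.

```python
# Hedged sketch (not EXPLORE itself): exhaustive induction of one-term rules
# that maximize sensitivity subject to a specificity constraint. Data invented.

# toy records: (feature_0, feature_1, diseased?)
data = [(1.0, 5.0, False), (2.0, 4.0, False), (3.0, 8.0, True),
        (4.0, 2.0, True), (2.5, 7.0, True), (1.5, 3.0, False)]

def scores(rule):
    """Sensitivity and specificity of a rule on the toy data set."""
    tp = sum(1 for *x, d in data if d and rule(x))
    fn = sum(1 for *x, d in data if d and not rule(x))
    tn = sum(1 for *x, d in data if not d and not rule(x))
    fp = sum(1 for *x, d in data if not d and rule(x))
    return tp / (tp + fn), tn / (tn + fp)

def explore(min_specificity):
    """Exhaustively search all 'feature >= threshold' rules."""
    best, best_sens = None, -1.0
    for f in (0, 1):
        for t in sorted({row[f] for row in data}):
            sens, spec = scores(lambda x, f=f, t=t: x[f] >= t)
            if spec >= min_specificity and sens > best_sens:
                best, best_sens = (f, t), sens
    return best, best_sens

rule, sens = explore(min_specificity=1.0)
print(rule, sens)  # (0, 2.5) 1.0 -- feature_0 >= 2.5 separates the toy classes
```

Restricting candidate thresholds to observed feature values, as done here, is what keeps the exhaustive search finite.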
91
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
I. Guelen 17 October 1996 Development of a digitally controlled instrument for pressure regulation in an upper-arm cuff prof.ir. K.H. Wesseling dr. J. van Goudoever ing. B. de Wit
Summary This report describes the design of an instrument that automatically performs inflation and deflation of an upper-arm cuff and detection of return-to-flow. The instrument is operated from a PC equipped with a real-time interface. The requirements for the instrument were determined partly by TNO and partly by the British Hypertension Society. Briefly, the most important requirements are: inflation within 5 seconds to a maximum of 330 mmHg, linear deflation at an adjustable rate of 2 to 10 mmHg/s, and the possibility of keeping the cuff pressure constant. The fact that the system has to work with many types of arm cuffs, differing in make and size, is a major complication in itself. In addition, the large variability in human upper arms has to be taken into account. A series of measurements supports the theory that the pneumatic components intended for the instrument have non-linear properties. To realize a linear deflation, a control system was therefore designed using a digital controller and an electrically driven proportional control valve. Simulations of this system were made with MATLAB, using measured component behaviour. With the model, a PI controller for linear deflation was designed. Inflation is also possible with this system, for which a P controller was designed. Keeping the cuff pressure constant is done with the same PI controller as used for linear deflation. To realize fast inflation, a buffer vessel is used. Because inflation through the proportional control valve takes too long, an extra (open/closed) valve is placed between the buffer vessel and the cuff; in this way inflation is indeed possible well within 5 seconds. For fast deflation, an (open/closed) valve between the cuff and the outside air is used.
If this valve is also opened during the last part of linear deflation, linear deflation is possible until the cuff pressure drops below 30 mmHg, so that the requirements are met. The control of the instrument is integrated in the software, which has been developed in the form of a library from which entities can be called for the control and the user interface of the instrument. The library and the user interface are described. Measurements of worst-case system behaviour lead to the conclusion that a system has been designed that meets all requirements.
92
Candidate: Graduation date: Graduation project: Supervising professor: Supervision:
L.M.W.M. Passier 12 December 1996 Design of a microcontroller- and motion-controller-based control system for a ventilator prof.ir. K.H. Wesseling dr. J.R.C. Jansen (Erasmus Universiteit Rotterdam)
Summary As part of my training as an electrical engineer at the Eindhoven University of Technology, I carried out my graduation research at the Erasmus University Rotterdam, Institute of Pulmonary Diseases, Pathophysiological Laboratory department. The graduation project investigated the possibilities for developing a new control system for an existing ventilator. The ventilator is used for animal-experimental research in that department. The device is a mechatronic system consisting of a mechanical system and a control system. The mechanical system is a positioning system, consisting of a motor system, a bellows and three gas valves. The control system is a servo system, controlled by a conventional computer (PC). With this ventilator an arbitrary series of ventilation patterns can be executed. Thanks to new technological developments, both the quality and the safety of the device can be greatly improved. Because there was a difference between the desired and the actual behaviour of the ventilator, its mechanical properties were investigated and a model of the ventilator was drawn up. With the model, possible modifications to the ventilator can be analysed before they are made. Using the model, control-engineering specifications of the system were determined and responses were analysed by means of simulations with the Simulink toolbox of the Matlab software package. A modular microcontroller- and motion-controller-based control system was designed to improve (i) the velocity pattern of the motor system, (ii) the control of the gas valves and (iii) the monitoring of the mechanism. The motor system is controlled by a motion controller, which receives the required commands from a microcontroller.
This microcontroller also controls the gas valves and monitors both the mechanics and the ventilation pattern executed by the positioning system. By delegating these tasks to a microcontroller, not only is the PC relieved of them, but the entire control loop is also contained within the ventilator itself. The positioning system has thereby been modified into a closed-loop (feedback) control process in which the PC changes data. The designed control system actively minimises differences between the actual and the commanded position by means of an algorithm, whose implementation is explained. The ventilator can be controlled by the PC through a serial interface. Using a serial interface increases the compatibility of the ventilator without requiring a drastic modification of the existing complex software. Several ventilators can be controlled by one PC in this way. Moreover, for simple ventilation patterns a microcontroller with a serial interface can be used as a command generator instead of the PC. Data transmission between the PC and the ventilator follows a half-duplex protocol.
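The position-error-minimising algorithm is not specified in this abstract; a minimal proportional-correction sketch of the idea (the gain, names and units are illustrative assumptions, not taken from the report):

```python
def velocity_command(commanded_pos: float, actual_pos: float,
                     gain: float = 0.5) -> float:
    """One iteration of a proportional correction: return a velocity
    command that reduces the position error (illustrative gain)."""
    return gain * (commanded_pos - actual_pos)

# A few iterations of the closed loop: the actual bellows position
# converges toward the commanded position.
actual = 0.0
for _ in range(20):
    actual += velocity_command(10.0, actual)
```

With a gain below 1 the error shrinks geometrically each iteration; a real servo would of course add integral action and rate limits.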
93
CHAIR OF ELECTROMECHANICS & POWER ELECTRONICS
95
Candidate: H.J. Boswinkel
Report no.: EMV 96-12
Graduation date: 17 October 1996
Graduation project: Randomized modulation schemes for discharge lamps
Supervising professor: prof.ir. J. Rozenboom
Supervision: dr. D. Antic
Summary During the past 10 years a lot of effort has been put into the development of an HF electronic ballast for HID lamps. Such an electronic HF ballast has the following advantages over a conventional magnetic ballast: 1. less weight; 2. easy regulation of the lamp power; 3. lower cost price. The problem that arises when developing an HF ballast for HID lamps is the occurrence of lamp instabilities due to acoustic resonances. An HID lamp has resonance frequencies that are determined by the geometry of the discharge vessel, the lamp filling and the temperature of the discharge process. Between the resonance frequencies there are narrower or wider resonance-free zones where tuned HF operation is possible. The location and width of these resonance-free zones vary with manufacturing tolerances and lamp age, which makes tuned HF operation difficult. The ultimate goal is to develop an HF ballast that causes no acoustic resonances in HID lamps regardless of the manufacturer of the lamp, the age of the lamp or manufacturing tolerances. This HF ballast should be realised with today's technology and without expensive components. Spreading the lamp power spectrum has proven to be a successful way to prevent acoustic resonances: the frequency components of the lamp power then remain below the resonance threshold. This was implemented by frequency modulation of the lamp current. In this report the effect of frequency modulation of the lamp current with different modulating patterns has been investigated, with the focus on using white noise. In recently published results it is claimed that it is possible to concentrate a large amount of power in a small bandwidth by using white noise, which allows the use of high-Q resonant inverters. Initial measurements have been done with a measurement arrangement in which an HF current is superimposed on a 200 Hz square-wave current to determine the positions of the resonances.
During these measurements it was found that the resulting HF spectrum was not as clean as it should be. A proposal for improving the measurement unit is given in this report. After these measurements the lamp was fed with a pure HF current. Four different kinds of modulation were used to analyse the differences among the modulating patterns. The four modulating schemes were: frequency modulation with a sine, a triangle and white noise as modulating signal, and a double-sideband modulated signal which was modulated with white noise.
96
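The spectrum-spreading idea investigated above can be sketched numerically: frequency-modulating a carrier with white noise flattens the power spectrum compared with an unmodulated carrier. All frequencies, deviations and amplitudes below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

fs = 1_000_000          # sample rate [Hz] (illustrative)
f_carrier = 50_000      # HF carrier frequency [Hz] (illustrative)
t = np.arange(0, 0.01, 1 / fs)

rng = np.random.default_rng(0)
deviation = 10_000      # peak frequency deviation [Hz] (illustrative)
noise = rng.standard_normal(t.size)
# FM: the instantaneous frequency offset is integrated into a phase.
phase = 2 * np.pi * np.cumsum(deviation * noise) / fs
modulated = np.cos(2 * np.pi * f_carrier * t + phase)
unmodulated = np.cos(2 * np.pi * f_carrier * t)

def peak_fraction(signal):
    """Fraction of total spectral power in the strongest frequency bin."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum.max() / spectrum.sum()
```

The unmodulated carrier concentrates essentially all power in one bin, while the noise-modulated carrier spreads it over many bins, keeping each component below a given resonance threshold.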
Candidate: P.J.M. van Gils
Report no.: EMV 96-13
Graduation date: 12 December 1996
Graduation project: Smartcell, modular amplifier
Supervising professor: prof.ir. J. Rozenboom
Supervision: J. Coenders (Philips)
Summary A novel implementation of paralleling converter modules without a master is proposed and analysed. Generally, the paralleling of power converters offers a number of advantages over a single high-power, centralised power supply. Performance-wise, the advantages include higher efficiency and better dynamic response due to a higher frequency of operation. System-wise, paralleling allows for redundancy and expandability of output power. A simple, high-efficiency, "autonomously" dynamic (DC/AC) current-sharing module is proposed and implemented. Each module operates either as a stand-alone unit or as a parallel module. A converter module consists of a half-bridge with an inductor and a capacitor, plus a control circuit. The half-bridge controls the current in the inductor, which is operated in the critical discontinuous mode. Because switching occurs at zero current, high efficiency is obtained; a drawback is the relatively large current ripple. Each module therefore contains a synchronisation circuit, which synchronises the phase of multiple modules to reduce the ripple at the common output. This synchronisation circuit uses a single wire (plus return) between the modules to provide an equal phase and current distribution. Initial conditions and disturbances determine the mutual position. The proposed circuit is verified for two modules of 125 W each. The stability of the synchronisation circuit for two DC/AC amplifiers is experimentally demonstrated.
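The ripple-reduction effect of the phase synchronisation can be illustrated with two idealised triangular inductor-current ripples: shifted 180 degrees apart they cancel at the common output, whereas aligned modules double the ripple. The waveform and values are illustrative, not the module's measured currents.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # one switching period (normalised)

def ripple(t, phase=0.0):
    """Zero-mean unit triangular ripple current (idealised waveform)."""
    x = (t + phase) % 1.0
    return 4.0 * np.abs(x - 0.5) - 1.0

aligned = ripple(t) + ripple(t)           # both modules in phase
interleaved = ripple(t) + ripple(t, 0.5)  # modules 180 degrees apart

ripple_aligned = aligned.max() - aligned.min()              # peak-to-peak
ripple_interleaved = interleaved.max() - interleaved.min()  # near zero here
```

In practice the cancellation is only partial, since real modules do not produce identical symmetric triangles, but the synchronised phase shift still reduces the ripple at the common output.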
97
Candidate: M. Angenent
Report no.: EMV 96-16
Graduation date: 29 August 1996
Graduation project: Design of the Multi-Purpose Memory (MPM) for the PhyDAS measurement system
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: prof.dr.ir. K. Kopinga (Faculty of Applied Physics), ing. J.A.M. Verhagen (Prodrive B.V.)
Summary The PhyDAS measurement system, which was developed and is widely used at the Department of Applied Physics of Eindhoven University of Technology, evolves with advances in technology. In a large number of experiments, the growing technical possibilities of real-time data acquisition make the data flow so large that the newest techniques have to be used to handle it: faster and more compact memory, front-end processing facilities and very fast transfer paths. It is therefore necessary to develop new modules for the PhyDAS measurement system from time to time, depending on the wishes of the experimenters and the state of the art. The PhyBUS 8 Mbyte Dual Ported Static Memory is a PhyDAS memory module that is still very suitable for various applications, but that cannot cope with the newest applications and developments of the PhyDAS measurement system. The graduation report discusses the development of a new memory module, the Multi-Purpose Memory (MPM). This module is designed for the newest applications, makes use of the latest technical developments, and also replaces the 8 Mbyte Dual Ported Static Memory. After a general introduction, the report describes some applications of the PhyDAS measurement system, among others an application in the EMV group of the Department of Electrical Engineering (the current-controlled inverter-fed induction machine), which makes clear why a new memory module is needed and what it must provide (data-reduction facilities and a flexible design). The MPM can be operated in different modes, depending on the application.
These MPM applications can roughly be divided into three categories: the MPM applied as interleaved memory; the MPM applied as front-end processor; the MPM applied as real-time control system. These three categories can be subdivided further, depending on the transfer paths used for data import and data export; in total there are 10 MPM modes. These modes form the basis of the design. Starting from these modes, the global architecture of the MPM is treated. It emerges, among other things, that the MPM supports all transfer paths of the PhyDAS measurement system (PhyBUS, Dual Sub BUS and PhyPAD). The MPM is also equipped with front-end processing facilities in the form of TIM-DSP modules (standardised modules from Texas Instruments, provided with a digital signal processor and a block of memory) and with an interface for dedicated logic in the form of a piggy-back, so that future applications can be supported as well. The last part of the design concerns the control of the MPM, a piece of dynamic logic implemented in EPLDs. The most important conclusions are: with the arrival of the MPM, the PhyDAS measurement system is equipped with a very powerful module for front-end processing, so that the ever larger data streams can be processed at an early stage; this data reduction ultimately leaves small (manageable) data streams to be processed by the host computer. Thanks to the flexible, modular structure of the MPM (front-end computing capacity in the form of TIM-DSP modules, memory in the form of TIM memory modules and a piggy-back interface for dedicated hardware), the MPM can easily be adapted to specific (future) requirements.
98
Candidate: G.J.F. Heijnen
Report no.: EMV 96-14
Graduation date: 29 August 1996
Graduation project: Sensorless field orientation by means of a combination of indirect field orientation and a voltage/current model
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: dr.ir. J.J.A. van der Burgt
Summary To use the squirrel-cage induction motor as the drive machine in systems that require good dynamic behaviour, complex control systems are necessary. Theoretically it is quite possible to control this machine type in the same way as a DC machine. Realising this theory (field orientation) requires knowing where the magnetic flux axis is located in the machine. To determine this axis, one can use flux sensors placed inside the machine. Because mounting such sensors strongly reduces the mechanical robustness of the machine and is often not possible in standard machines, machine models are used that determine the position of the magnetic flux axis from measured terminal quantities (voltage and current). The accuracy of the estimated quantities depends not only on how well the structure of the model approximates reality, but also on how well the estimated machine parameters match the actual values. The parameter sensitivity turns out to be very large, especially at low speeds, and much research is therefore devoted to improving the estimate of the flux-axis position at low speeds. In this report two methods are analysed in detail. If no speed sensor is used, the speed must additionally be estimated from the terminal quantities. A method for a sensorless field-oriented machine proposed by Ohtani is treated, and a comparison is made with a method proposed by Bonanno that is derived from Ohtani's method. Both methods are based on indirect field orientation and use the same flux model. Ohtani's control contains two PI controllers: one for the control of the rotor speed and one that, together with the flux model, estimates the rotor speed.
The estimated rotor speed is adjusted by the controller such that the magnitude of the desired value of the torque-producing component of the stator current becomes equal to the value calculated with the flux model. The coupling between the two controllers makes the control complex. Bonanno avoids the tuning of both controllers by determining the rotor speed in a more direct way: he estimates the rotor speed from the estimated stator frequency and the estimated slip frequency, both of which are determined with the flux model. The estimated rotor speed is adjusted such that the estimated slip frequency becomes equal to the desired value. The behaviour of both models at low rotor speed was examined. Regarding parameter sensitivity, only the influence of deviating values of the stator and rotor resistance was investigated. It turns out that with Ohtani's method it is possible, independently of the slip frequency, to bring the machine into a perfect field-oriented state at zero rotor speed, provided that the estimated stator resistance is not larger than the actual value. For Bonanno's method, at zero rotor speed the magnitude of the slip frequency does play a role: the method works better at a larger slip frequency, but no ideal field orientation is obtained. A deviating model value of the rotor resistance with respect to the true value has, in both models, no influence on the field orientation and only causes an error in the estimate of the rotor speed. Because this error depends, among other things, on the load level, in relative terms the error will be largest at low speeds and a large load torque.
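Bonanno's more direct estimate described above amounts to subtracting the estimated slip frequency from the estimated stator frequency, both delivered by the flux model. A minimal sketch of that arithmetic; the numeric values and function name are illustrative assumptions:

```python
def estimated_rotor_speed(stator_freq_hz: float, slip_freq_hz: float,
                          pole_pairs: int) -> float:
    """Mechanical rotor speed estimate [rev/s]: the rotor's electrical
    frequency is the stator frequency minus the slip frequency."""
    return (stator_freq_hz - slip_freq_hz) / pole_pairs

# Example: 50 Hz estimated stator frequency, 1 Hz estimated slip,
# two pole pairs.
n_mech = estimated_rotor_speed(50.0, 1.0, 2)
```

In the actual scheme both frequency estimates inherit the flux model's parameter sensitivity, which is why a rotor-resistance error shows up as a speed-estimate error rather than as a loss of field orientation.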
99
Candidate: R.R.W. de Jager
Report no.: EMV 96-23
Graduation date: 17 October 1996
Graduation project: Simulation of quarter-car model by analogue electronics
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: ir. P.J.M. Smidt
Summary Eindhoven University of Technology is developing, in co-operation with DAF Trucks, Contitech and Monroe, an advanced controller for a semi-active rear-axle suspension. This controller uses simulations of a so-called quarter-car model to predict the optimal damper sequence out of all possible damper settings. The simulations of the quarter-car model proved to be very time-consuming; even a DSP (digital signal processor) was too slow. This led to the idea of creating an analogue circuit of the quarter-car model to speed up the calculation. The quarter-car model, described by two linear differential equations, is first implemented in MATLAB/Simulink. Amplitude and time scaling are carried out to fit the amplitudes of the signals within the 24 V supply. Before building the linear analogue simulation circuit, a set-up of the scaled Simulink model is created in PSPICE, a CAD program for electronics. To pinpoint errors in an early stage of development, the linear analogue circuit is then checked for deviations from the Simulink model. The linear damper, spring and tire characteristics are replaced with their non-linear counterparts to create a realistic simulation model. To make the simulation model even more realistic and to predict the optimal damper sequence, the analogue simulation circuit is extended with a switching damper with its specific characteristics. Finally, the initial conditions are set to account for the initial values from the preceding simulation. With this complete analogue simulation circuit it is possible to simulate the quarter-car model and predict the optimal damper sequence out of all possible damper settings. A hybrid model can then be developed, allowing for a simple microcontroller. In this way a fast and low-cost simulation model can be created, in which the interactions between the analogue simulation circuit and the vehicle are handled by the microcontroller.
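The two linear differential equations of a quarter-car model (a sprung mass on a spring and damper, an unsprung mass on the tire stiffness) can be sketched and integrated numerically; every parameter value below is an illustrative assumption, not taken from the thesis:

```python
# Quarter-car: sprung mass m_s on spring k_s and damper c, unsprung mass
# m_u on tire stiffness k_t. States: positions z_s, z_u, velocities v_s, v_u.
m_s, m_u = 400.0, 40.0                      # masses [kg] (illustrative)
k_s, k_t, c = 20_000.0, 180_000.0, 1_500.0  # stiffnesses [N/m], damping [Ns/m]

def derivatives(z_s, v_s, z_u, v_u, road=0.0):
    """Time derivatives of the four states for road input `road` [m]."""
    f_susp = k_s * (z_u - z_s) + c * (v_u - v_s)   # suspension force on m_s
    a_s = f_susp / m_s
    a_u = (-f_susp + k_t * (road - z_u)) / m_u     # tire force acts on m_u
    return v_s, a_s, v_u, a_u

# Release the sprung mass from a 10 mm offset; the damped system
# settles back toward equilibrium.
dt = 1e-4
z_s, v_s, z_u, v_u = 0.010, 0.0, 0.0, 0.0
for _ in range(50_000):  # 5 s of explicit-Euler integration
    dz_s, dv_s, dz_u, dv_u = derivatives(z_s, v_s, z_u, v_u)
    z_s += dt * dz_s; v_s += dt * dv_s
    z_u += dt * dz_u; v_u += dt * dv_u
```

The analogue circuit of the thesis evaluates exactly this kind of state integration continuously with op-amp integrators, which is why it outruns a sample-by-sample digital simulation.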
100
Candidate: P.B.A. Jansen
Report no.: EMV 96-15
Graduation date: 29 August 1996
Graduation project: A gas-turbine-driven generator set for maritime use
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: dr.ir. L.J.J. Offringa, ir. J.H. Ebels (KLM Den Helder)
Summary A high-speed generator coupled directly, without an intermediate gearbox, to a gas turbine will be used for the generation of electrical power, among other places on board ships. The graduation work can be divided into:
a. a description of the technical specifications of the system with a high-speed generator developed by the Electromechanics and Power Electronics (EMV) group;
b. a comparison of a diesel-driven generator and a gas-turbine-driven generator for maritime use in the 1400 kW power class.
Both the permanent-magnet generator and the coupling to a grid impose requirements on the power-electronic converter between the generator and the grid; the consequences of these requirements for the power electronics are investigated. The expectation is that this design offers advantages in terms of efficiency, weight, volume, control, etc. A comparative study is made between diesel-engine-driven and gas-turbine-driven generators in the 1400 kW power class. The comparison focuses on the expected advantages mentioned above, and also addresses other aspects such as maintenance, noise shielding, fuel, environmental requirements and other topics specifically related to maritime or military use. The description of the total system is partly based on data that became available during the development of the 1400 kW high-speed generator and from measurements on an 80 kW scale model in the EMV laboratory.
101
Candidate: P.J.M. Julicher
Report no.: EMV 96-19
Graduation date: 29 August 1996
Graduation project: Multilevel converter for magnetic resonance imaging
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: dr.ir. L.J.J. Offringa, dr. J.L. Duarte, ir. W. van Groningen (Philips Medical Systems)
Summary For a long time it has been possible to obtain images of organs in the human body by applying Magnetic Resonance Imaging (MRI). The magnetic fields required for MRI are generated by driving coils with pulse-shaped currents. The high-frequency switching power-electronic converters needed to generate these currents have been developed for ever larger power levels in recent years. By now, however, the power limit of conventional techniques has been reached, and the power range desired for MRI applications exceeds this limit by far: a voltage and current range of 1200 V, 600 A is envisaged. Several manufacturers, among them Philips Medical Systems, are investigating the applicability of new techniques that could theoretically reach the desired large power range. This research originated from a collaboration between Philips Medical Systems and the Electromechanics & Power Electronics group of the TUE. The goal of the research is to establish whether or not the Imbricated Cells multilevel converter, presented in the early 1990s, is suitable for MRI applications. A large part of the research is devoted to a study of the stability of this converter. The three-level Imbricated Cells converter contains an auxiliary capacitor that must remain charged to half the supply voltage under all circumstances. It turned out that stability of this auxiliary capacitor voltage in steady state can be achieved by applying a specific control of the switches in the converter. If an external disturbance changes the auxiliary capacitor voltage, this voltage will in general return to the desired value. This return is caused by an effect called "natural balancing" in the various publications.
It turned out that in high-frequency switching converters no useful advantage can be taken of this effect. This shortcoming can be remedied by extending the Imbricated Cells converter with conventional Neutral Point Clamped (NPC) techniques; it is shown that this extension guarantees the stability of the converter. From a study of the current paths occurring in the NPC-Imbricated Cells converter during a disturbance, additional dimensioning requirements for the semiconductors were derived. Furthermore, much attention was paid to the design and construction of a suitable control for a prototype three-level half-bridge converter, with which the trapezoidal current pulses widely used in MRI can be imitated. The control that was built produces a periodic, trapezoidal load current in the inductive load of the half-bridge. Because mainly the switching behaviour of the converter is of interest, an open-loop control suffices; its principle is based on pulse-width modulation. Measurements on the conventionally built prototype three-level half-bridge converter demonstrate the stability of the system. These measurements were performed at low voltage and current levels (300 V, 1 A). Raising these levels leads, owing to the high switching frequency (25 kHz) and the poor layout of the converter, to considerable EMC disturbance in the measurement set-up, and reverse-recovery effects in the converter then also play an important role. Therefore a low-inductance layout for the converter, based on bus-bar technology, will be designed in the near future, with the aim of achieving a voltage and current range of 1200 V, 150 A. It can be concluded that the combination of NPC and Imbricated Cells multilevel topologies is in principle suitable for MRI applications; the layout of the converter is decisive for realising a large power range.
Much work therefore remains to be done before a prototype can be practically realised.
102
Candidate: E.C.J. Moerkens
Report no.: EMV 96-18
Graduation date: 29 August 1996
Graduation project: Verification of a thermal model of squirrel-cage induction machines
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: ir. R.W.P. Kerkenaar, ir. J.G. Sloot
Summary The protection of squirrel-cage induction machines against overload is mainly based on a simulation of their thermal behaviour by means of bimetallic strips in thermal relays. As an alternative, this simulation can nowadays be realised digitally using computers. In the CLINK motor-protection system of HOLEC, a model is applied in which the simulated temperature is characterised by two distinct time constants. The parameters of the model, consisting of two heat capacities and two thermal conductances, have to be determined from data that are available for all machines, such as nameplate data. The two heat sources in the model are determined from the instantaneous stator current values. Within the framework of the existing standards, the model must protect the machine completely. The sensitivity of the model to variation of the model parameters was investigated, starting from a parameter set valid for the heating of a machine under rated load. As a first step towards verification of the model, temperature measurements were performed. In an earlier investigation, measurements had mainly been carried out on machines under rated conditions, using Pt100 sensors. In the present investigation the measurements focused on squirrel-cage machines with a locked rotor; thermocouple sensors were chosen here because of their fast response time. Measurement electronics with ten temperature channels and three current channels were designed, and the accompanying software handles the sampling of the channels and the transfer of the measurement data to a PC. Locked-rotor tests were performed on three totally enclosed squirrel-cage machines, measuring the currents in the three stator phases and the temperatures at a number of locations in the machine, at both rated and reduced supply voltages.
The sensitivity study revealed a strong dependence of the model on variation of the thermal conductance that characterises the heat transport from the machine to its surroundings; physically, this parameter depends on the ambient conditions in which the machine is installed. The highest temperature rise in a locked machine occurs on a rotor bar, and the temperature rise of this so-called "hot spot" turns out not to be adiabatic. The model parameters turn out not to be generally valid, but to depend on the operating situation. The dominant time constant of a locked machine is of the order of seconds, whereas for a machine under rated load the smallest time constant is several minutes. Applying a model with two time constants therefore makes little sense for a locked machine.
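The model structure described above, two heat capacities and two thermal conductances fed by current-dependent losses, can be sketched as a two-node thermal network. All parameter values are illustrative assumptions, not the CLINK parameters:

```python
# Two-node thermal model: node 1 (winding) and node 2 (frame).
# Temperatures are rises above ambient. Values are illustrative.
C1, C2 = 500.0, 5_000.0   # heat capacities [J/K]
G12, G2a = 25.0, 10.0     # conductances winding->frame, frame->ambient [W/K]

def step(T1, T2, P1, P2, dt):
    """One explicit-Euler step for loss inputs P1, P2 [W]."""
    q12 = G12 * (T1 - T2)            # heat flow winding -> frame
    T1 += dt * (P1 - q12) / C1
    T2 += dt * (q12 + P2 - G2a * T2) / C2
    return T1, T2

# Heat with constant losses until (near) steady state.
T1 = T2 = 0.0
P1, P2 = 300.0, 100.0     # illustrative losses at a given stator current
dt = 1.0
for _ in range(20_000):   # about 5.5 h of simulated heating
    T1, T2 = step(T1, T2, P1, P2, dt)
```

With these illustrative values the frame settles at (P1 + P2) / G2a = 40 K above ambient and the winding a further P1 / G12 = 12 K higher; the winding node responds within tens of seconds while the frame takes minutes, mirroring the widely separated time constants discussed in the report.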
103
Candidate: C. Rijksen
Report no.: EMV 96-05
Graduation date: 25 April 1996
Graduation project: Simulink model of synchronous generators under operating conditions
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: ir. R.W.P. Kerkenaar, ir. W.R.C. Mees (Holec Ridderkerk)
Summary This report describes a number of dynamic models for the simulation of a synchronous machine in normal and exceptional operating conditions. With these models, phenomena in the subtransient, transient and steady-state regions can be simulated. Based on a number of assumptions, the equations for the synchronous machine are derived. After introducing a per-unit system, the machine equations are rewritten into a generator model (without controls). The various parts of the generator model are then implemented in Simulink. With the generator model the following operating conditions were simulated:
* no load;
* a three-phase short circuit from no load;
* a two-phase short circuit from no load;
* a single-phase short circuit from no load.
After adding a model of the excitation to the generator model, models were developed for the simulation of:
* switching the load on and off;
* switching part of the load on and off;
* a step on the setpoint of the excitation voltage of the generator;
* a three-phase short circuit of a preloaded generator;
* a two-phase short circuit of a preloaded generator;
* a single-phase short circuit to earth of a preloaded generator.
A dynamic model of the induction machine was derived for simulating the start of an induction machine on a generator operating in island mode. All simulation models give plausible results. The simulations use a realistic set of parameters for the generator, the excitation control and the loads. To make the models easier to use, a graphical user interface was set up in Matlab.
104
Candidate: F.M. Roes
Report no.: EMV 96-07
Graduation date: 25 April 1996
Graduation project: Field oriented control of induction machines using end ring current detection
Supervising professor: prof.dr.ir. A.J.A. Vandenput
Supervision: dr.ir. J.L. Duarte
Summary The main problem in field-oriented control of induction machines is locating the flux in the machine. Once the position of the flux is known, field-oriented control becomes easy. At the end of 1994, a new approach to locating the rotor flux was presented by Matsuo et al. With this method it is possible to locate the flux under all conditions, even down to zero speed. In this thesis, the results of research into that new method are presented. Some interesting new points are described, which are crucial for determining the flux position accurately. The rotor current is measured indirectly by means of Hall sensors: a set of Hall sensors, displaced by 120 degrees, was placed in the vicinity of the end ring. This requires only minor mechanical adaptations of the machine. The Hall sensors measure the flux produced by the current in the end ring. As soon as the current in the end ring is known, the currents in the rotor bars are also known. Together with the measured stator currents, the position of the rotor flux can then be determined accurately. This method is independent of machine parameters and temperature, and is therefore very robust. The method is applied to a 2.2 kW, 220 V, 2-pole, squirrel-cage induction machine. In contrast with the reported measurements of Matsuo et al. (and his predecessors), measurements have shown that the air-gap flux contributes substantially to the Hall sensor signals. That effect causes unacceptable deviations in the rotor current measurement, so the method in the form proposed by Matsuo is not useful for small machines. A compensation scheme for the air-gap flux contribution has been added. The new scheme gives very good results and is robust. The rotor flux position can be determined accurately over the full speed range, even down to zero speed. Deviations are less than 5 degrees in comparison with the calculated rotor flux position.
The new rotor flux position scheme is used in a field-oriented controlled drive. The controller is implemented on a digital signal processor (TMS320C30), and a hysteresis current-controlled PWM inverter is used to create the desired currents in the machine. All tests confirm that the rotor flux position is exactly known under all conditions. The drive shows excellent dynamic properties, even during very slow speed reversal.
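Once the rotor currents (recovered from the end-ring measurement) and the stator currents are available in the stationary alpha-beta frame, the rotor flux position reduces to an arctangent of the flux components. This sketch uses the standard machine relation psi_r = L_m * i_s + L_r * i_r; the inductance values and names are illustrative assumptions, not the thesis implementation:

```python
import math

def rotor_flux_angle(i_s_alpha, i_s_beta, i_r_alpha, i_r_beta,
                     L_m=0.2, L_r=0.21):
    """Rotor flux angle [rad] in the stationary alpha-beta frame from
    stator and (measured) rotor currents: psi_r = L_m*i_s + L_r*i_r.
    Inductances in henry; values are illustrative."""
    psi_alpha = L_m * i_s_alpha + L_r * i_r_alpha
    psi_beta = L_m * i_s_beta + L_r * i_r_beta
    return math.atan2(psi_beta, psi_alpha)
```

Because the rotor current enters as a direct measurement rather than a model estimate, the angle computed this way does not drift with rotor-resistance or temperature changes, which is the robustness the abstract refers to.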
105
106
DEPARTMENT OF INFORMATION & COMMUNICATION SYSTEMS
107
CHAIR OF DIGITAL INFORMATION SYSTEMS
109
Candidate: D. van de Meulenhof
Report no.: EB 636
Graduation date: 12 December 1996
Graduation project: Evaluation of a high-speed wireless ATM based LAN
Supervising professor: prof.ir. F. van den Dool
Supervision: dr.ir. P.F.M. Smulders
Summary The evolution of broadband networks is characterized by the integration of services as well as the need for mobility and flexibility. Within the framework of the Advanced Communications Technologies and Services project "MEDIAN", a high-speed wireless broadband Customer Premises Network/Local Area Network for professional and residential multimedia applications is being developed. The objective of MEDIAN is to evaluate the performance of such a system through a requirement study, analysis and simulation of parts as well as an overall simulation. The MEDIAN system is characterized by:
• the 60 GHz technology, which was up to now applied for military purposes;
• the Time Division Multiple Access-oriented technique, based on the Time of Expiry principle;
• the unprecedented aggregate maximum throughput of 150 Mbit/s;
• the Asynchronous Transfer Mode, offering broadband capability.
Part of the research focuses on the development and implementation of a pilot system, called the MEDIAN demonstrator, which is used to show the performance of such a system in real-user trials. The research presented in this Master's thesis evaluates the problems concerning the development of a wireless Asynchronous Transfer Mode based Local Area Network, focused on the MEDIAN demonstrator. The MEDIAN demonstrator, as considered in this thesis, consists of two portable stations and a base station. The transmission of information between the portable stations takes place via the base station according to the principles of ATM. Additionally, the base station will be connected to the fixed ATM network in order to demonstrate interoperability with the environment. To arrive at a conceptual division of the functionalities of the MEDIAN demonstrator, a reference configuration is derived. In addition, the organization of the functionalities within the demonstrator is considered; in this context the MEDIAN Protocol Reference Model is introduced. Finally, a high-level functional description of the MEDIAN demonstrator is presented.
Candidate: J.P.M. van Osch (Report no. EB 638)
Graduation date: 12 December 1996
Graduation project: Performance evaluation of distributed databases with application to Intelligent Networks
Supervising professor: prof.ir. F. van den Dool
Supervision: drs. T.A.B. Nauta (KPN Research), dr.ir. J. van Tilburg (KPN Research)
Summary
Presently, various new services are being brought to market within the framework of the liberalisation and regulation of the telecommunication market. New strategies are developed to create and introduce advanced services. The Intelligent Network (IN) provides these possibilities, besides management and control, and administers service and subscriber data in a database. If the storage capacity of this centralised database becomes insufficient, new methods for data storage must be considered. A distributed database is regarded as a conceivable alternative. The objectives identified in the graduation report are:
• Evaluate the performance of a distributed database built from low-end computing engines and managed by a commercial database management system;
• Study the feasibility of distributed databases for application in the Intelligent Network (IN);
• Investigate the application of the Common Object Request Broker Architecture (CORBA) as the connection platform between the Service Control Point (SCP) and Service Data Points (SDPs).
Oracle has two techniques to support distributed databases: replication and fragmentation. Replication strategies update modified data, and fragmentation optimises the structure of tables to minimise the performance degradation inflicted by the replication and to improve database access. Replication strategies fall into two classes: synchronous and asynchronous replication. Synchronous replication strictly maintains data consistency, but needs a considerable amount of time to write data. Asynchronous replication is more flexible, but consistency of data is not guaranteed. The distributed database is accessed through an implementation of the CORBA standard, Orbix. CORBA separates database operations from the actual implementation and provides distribution transparency from the SCP's point of view.
The operations are activated through an interface, designed in an interface definition language, and invoke the requested database transactions. The performance, specified as throughput and response time, is measured for the different replication strategies, the size of the database, and the behaviour of two IN-services, namely NPTS and NACO. An elaborate description of the measurement results has been supplied, and an extrapolation of the results has been made. Finally, conclusions and recommendations complete the report.
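The trade-off described above, synchronous replication paying write latency for strict consistency while asynchronous replication commits locally at the price of eventual consistency, can be sketched with a toy latency model. All numbers and function names below are hypothetical illustrations, not measurements from the report:

```python
# All numbers are hypothetical round figures, not measurements from the report.
LOCAL_WRITE_MS = 5     # cost of committing on the local node
REMOTE_WRITE_MS = 20   # cost of synchronously updating one remote replica

def sync_write_latency(n_replicas):
    """Synchronous replication: the write commits only after every replica
    has acknowledged, so consistency is strict but writes are slow."""
    return LOCAL_WRITE_MS + n_replicas * REMOTE_WRITE_MS

def async_write_latency(n_replicas):
    """Asynchronous replication: the write commits locally and replicas are
    updated later, so writes are fast but consistency is only eventual."""
    return LOCAL_WRITE_MS  # propagation to the replicas is off the critical path

for n in (1, 2, 4):
    print(n, sync_write_latency(n), async_write_latency(n))
```

The model only captures the first-order effect the measurements quantify: synchronous write cost grows with the number of replicas, asynchronous write cost does not.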
Candidate: O. Sies (Report no. EB 611)
Graduation date: 25 April 1996
Graduation project: Automatic techniques for protocol conformance testing: Study of automatic test generation and test execution applied to the Apple P1394 LinkCore
Supervising professor: prof.dr.ir. C.J. Koomen
Supervision: prof.dr.ir. L.M.G. Feijs, dr. R.L.C. Koymans (Philips Semiconductors)
Summary
At present, the labor involved in testing may take up to 50% of the total effort in the development of communication protocols. Even a marginal decrease of this testing effort is likely to amount to significant savings. A lot of research has been done on the development of methods for automated conformance testing of communication protocols. Formal, computer-processable models, captured from the applicable standard, serve as the basis for test generation. The graduation report presents a method and tools which have been developed for the purpose of automated test generation and test execution for VHDL models of protocol implementations. The PTT Conformance Kit has been used for the production of abstract test cases in ISO-TTCN. The approach has been worked out for a VHDL implementation of the link layer of the IEEE P1394 protocol (a high-speed serial bus protocol). A VHDL run-time environment to execute these abstract test cases has been developed.
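Test generation from a formal model typically means deriving input sequences that exercise every transition of a finite state machine extracted from the standard. The toy protocol FSM and the greedy transition-tour routine below are illustrative assumptions, not the P1394 link layer or the Conformance Kit's actual algorithm:

```python
# Hypothetical toy protocol FSM: (state, input) -> next state.
FSM = {
    ("idle", "connect"): "open",
    ("open", "data"): "open",
    ("open", "disconnect"): "idle",
}

def transition_tour(start="idle"):
    """Greedy input sequence that exercises every transition at least once."""
    seq, state, todo = [], start, set(FSM)
    while todo:
        # prefer an untried transition leaving the current state
        choice = next(((s, i) for (s, i) in sorted(todo) if s == state), None)
        if choice is None:
            # otherwise take any transition onward and keep searching
            choice = next((s, i) for (s, i) in sorted(FSM) if s == state)
        seq.append(choice[1])
        todo.discard(choice)
        state = FSM[choice]
    return seq
```

Each input sequence produced this way becomes one abstract test case; a run-time environment then applies it to the implementation and compares observed against expected behaviour.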
Candidate: M.C.M. Muijen (Report no. EB 618)
Graduation date: 27 June 1996
Graduation project: SmartScan: A new approach to the partial scan problem
Supervising professor: prof.ir. M.T.M. Segers
Supervision: ir. E.J. Marinissen (Philips Research Lab.), ir. P.W.M. Merkus (Philips Research Lab.)
Summary
Full scan is an attractive and often-used Design for Testability technique, but it has several costs connected to it. Partial scan attempts to reduce those costs. Most partial scan methods do so by trading in some of the benefits of full scan. SmartScan is a partial scan method that reduces the costs of full scan while retaining all its benefits: (1) excellent fault detection capability, (2) efficient test pattern generation through combinational ATPG, and (3) automated implementation.
Candidate: F.J.P. Peters (Report no. EB 626)
Graduation date: 29 August 1996
Graduation project: A comparison of realistic defect coverages for voltage and IDDQ measurements
Supervising professor: prof.ir. M.T.M. Segers
Supervision: ir. S. Oostdijk (Philips Nat. Lab.)
Summary
As IC quality demands increase, the stuck-at (S.A.) fault model becomes insufficient. Especially with regard to the most important realistic defect models, i.e. the bridging defect model and the six-transistor short model, the behaviour of the defects cannot be mapped onto an S.A. fault. Simulation of all realistic bridges and six-transistor shorts within library cells has been performed to determine fault detection tables and defect coverages. It is shown that the detection of a number of defects cannot be guaranteed by a voltage test. In order to verify the use of fault detection tables on complete ICs, an analysis of a larger circuit has been made. Current test strategies are evaluated for their realistic defect coverage. The trade-off between voltage and IDDQ testing is determined. It is shown that in order to guarantee a high coverage of realistic defects, IDDQ testing is required. The S.A. model and voltage testing perform rather poorly. Defect coverages for circuits are about 90-98% for IDDQ testing, while the guaranteed defect coverage for voltage testing is only about 50%. The fault detection tables can be used to take the overlap of voltage and IDDQ testing into account and to predict defect coverages for complete ICs. In order to reach 0 ppm, more than 10-20 IDDQ vectors are required and the use of fault detection tables within ATPGs will be needed. More research should be performed to enable faster IDDQ measurements and to enable the use of the fault detection tables in ATPGs.
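A fault detection table records, per modelled defect, which test type detects it; the defect coverage of a test strategy is then the fraction of defects caught by at least one of its tests, which also makes the overlap between voltage and IDDQ testing explicit. The table entries below are invented for illustration (the report's real tables come from cell-level simulation):

```python
# Entries invented for illustration; the report derives its tables from
# simulating bridges and six-transistor shorts in library cells.
TABLE = {
    "bridge_1": {"voltage"},          # detected by the voltage test only
    "bridge_2": {"iddq"},             # detected by the IDDQ test only
    "short_1":  {"voltage", "iddq"},  # the overlap between the two tests
    "short_2":  {"iddq"},
}

def coverage(tests):
    """Fraction of defects detected by at least one of the given tests."""
    detected = [d for d, by in TABLE.items() if by & set(tests)]
    return len(detected) / len(TABLE)

print(coverage(["voltage"]))          # voltage alone
print(coverage(["iddq"]))             # IDDQ alone
print(coverage(["voltage", "iddq"]))  # combined strategy
```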
Candidate: W.A.H. Berkvens (Report no. EB 625)
Graduation date: 29 August 1996
Graduation project: Implementation of a bit processor as part of a Controller Area Network protocol processor
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: dr.ir. A.C. Verschueren
Summary
The master's thesis describes the hardware implementation of a bit processor and its interface to a byte processor as part of a CAN protocol processor. A CAN protocol processor can be used as a node on a CAN bus. The bit processor is designed in conformance with the Controller Area Network (2.0B) specifications defined by ISO. The bit processor is a totally new design, using only the structure of the communication lines of an existing SDLC/HDLC controller. A model for the mapping of the CAN specifications onto hardware building blocks, which had already been created, was used as a starting point. The implementation of the bit processor is subdivided into eight units, each with its own specific task:
• a PLS (Physical Signalling) unit that takes care of the bit timing and the synchronisation;
• a Management unit that consists of three sub-units: the Control unit controlling the state of the bit processor, the Compare unit used to compare the value set on the bus with the value received from the bus, and the FCE (Fault Confinement Entity) unit that is used for error detection;
• a BitStuff unit used for the insertion of stuff bits and a BitDestuff unit used for the removal of stuff bits;
• the Tx and Rx units, which control the transmission and reception of the bits of a frame;
• a CRC unit that handles the computing, transmission and checking of the CRC;
• the BackEnd unit, implemented as the interface between the bit processor and the byte processor.
The implementation of the bit processor described in the master's thesis works completely according to the CAN specifications and can be used in the CAN protocol processor. In the bit processor a start has been made with the implementation of test hardware. The test functionality already implemented is described, as well as some of the functionality that could easily be added in the future.
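The rule the BitStuff and BitDestuff units implement is CAN's stuffing rule: after five consecutive bits of equal value, a bit of the opposite value is inserted on transmission and stripped again on reception. A minimal software sketch of that rule (illustrative, not the hardware design):

```python
def stuff(bits):
    """Insert a bit of opposite value after five consecutive equal bits,
    as the BitStuff unit does on transmission."""
    out, prev, run = [], None, 0
    for b in bits:
        run = run + 1 if b == prev else 1
        prev = b
        out.append(b)
        if run == 5:
            out.append(1 - b)      # complementary stuff bit
            prev, run = 1 - b, 1   # the stuff bit starts a new run
    return out

def destuff(bits):
    """Remove the stuff bits again, as the BitDestuff unit does on reception."""
    out, prev, run, i = [], None, 0, 0
    while i < len(bits):
        b = bits[i]
        run = run + 1 if b == prev else 1
        prev = b
        out.append(b)
        i += 1
        if run == 5 and i < len(bits):
            prev, run = bits[i], 1  # drop the stuff bit; it starts a new run
            i += 1
    return out
```

In the hardware design this is a small state machine keeping the same run counter; the software version only shows the invariant that `destuff(stuff(x)) == x`.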
Candidate: F.M.H. Clermonts (Report no. EB 624)
Graduation date: 29 August 1996
Graduation project: The development of a graphical PLC programming environment
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: prof.ir. M.P.J. Stevens
Summary
The graduation report contains a description of the development of a programming environment for a PLC and falls into two parts. The first part contains a discussion of what such a programming environment should look like. Programming by means of graphical symbols (components) was chosen. Analogous to digital switching circuits, these components represent operations on the signals connected to their pins. A limited set of components has been defined with which a control system can be realised. Furthermore, it is described how the correct code for a PLC can be generated from the graphical representation of a PLC program. Particular attention is paid to the efficiency of the generated code. The second part of the report focuses on the actual implementation of the programming environment. The implementation was carried out in C++ and runs under Windows 3.11. The implementation emphasises object-oriented programming. It is described how a PLC program can be represented in this form. Furthermore, a description is given of the most relevant objects.
Candidate: N.H. Ederveen (Report no. EB 628)
Graduation date: 29 August 1996
Graduation project: A Genetic Algorithm for solving the Minimal Input Support Problem
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: dr.ir. L. Jozwiak
Summary
Many problems, especially in technical fields, involve optimization. Whether ICs are developed (high clock frequency, small area) or planes have to be constructed (strong, light materials, aerodynamics), one or more objectives have to be optimized. Even in forecasting the weather or the stock exchange index, the goal is to minimize the differences between the predictions and reality. Frequently, no algorithms are available to find the best solution to such problems in an acceptable amount of time. Therefore, special algorithms have to be developed to find satisfactory solutions (so-called heuristic algorithms). Two drawbacks of this approach are the development time and the limited reusability. Recently, there has been a growing interest in the optimization possibilities offered by genetic algorithms. Although they cannot guarantee to find the truly optimal solution, they have shown very powerful characteristics on a wide variety of problems. As the basic elements of the genetic algorithm remain the same for various problems, this approach overcomes the drawbacks mentioned earlier. These features sound very promising. The reason for the Section of Digital Information Systems at Eindhoven University of Technology to investigate some aspects of genetic algorithms is the need to check whether they would be useful for solving some major design problems and how they should be used in such applications. Because the practical performance, rather than just the theoretical background, is of high interest, the genetic algorithm had to be tried on some practical design problems. The minimal input support problem was chosen, because the problem has a clear structure, comparison material from some other algorithms is accessible, and there is a need for a very effective and efficient algorithm for this particular problem.
The task of the graduation assignment was to analyze the features of genetic algorithms, to develop and implement an effective and efficient genetic algorithm for the input support problem, and to characterize the algorithm using practical instances of the problem. The minimal input support problem is concerned with reducing the number of inputs needed while maintaining the performance of the system: a predefined Boolean function has to be realised with as few input bits as possible. In the same section of the university, extensive research has been done on this topic and a very powerful heuristic method was developed. Therefore, results of the genetic algorithm can be compared with the results of this method. Genetic algorithms have borrowed some aspects from natural populations. A population contains several elements, each defining a possible solution. The worse the defined solutions are, the lower the corresponding elements are valued. Every generation a new population is formed. Elements of this new population (offspring) are based upon selected elements of the old population (parents). Highly valued elements have a higher chance of being selected for reproduction than low-valued ones. There are several ways to create offspring. The basic genetic algorithm has three operators to choose from: a parent can be copied to form offspring, a very small part of a parent can be changed to form mutated offspring, and some equal parts of two parents can be interchanged in order to get two new solutions - this way of creating elements is called crossover. Every operator has its own probability of being chosen, and the parts involved are randomly selected by the operators. The new population will be treated just like the old one: elements will be selected, operators will be applied to them and a new population will be created. This process continues until a predefined stop condition is met.
The genetic algorithm we have developed contains two extra operators to incorporate problem-specific knowledge. The repair operator can be applied when an element defines a solution that is so bad that the proposed input bits are not sufficient to maintain the Boolean function.
This operator will add just a few input bits to create an element that enables realising the functionality. The merge operator will combine the information of two elements and find a locally optimal solution, which is then added to the new population. This genetic algorithm contains many probabilities, all of them influencing the final results of the tool. Therefore, we have tried to adjust the probabilities to fit the problem well. First, the probabilities of the crossover and mutation operators have been tuned in order to get the best result using just those operators (including copying). Thereafter the repair operator has been tuned, leaving the settings of the others the same. Finally, the merge operator has been adjusted. Using a test set of 20 Boolean functions, the probabilities for the operators to be applied did not prove to be critical. As might be expected from a stochastic process, results varied within some range, making it very time-consuming to find one setting superior to another. With the final settings, we have almost always found solutions with an equal or higher quality than the heuristic method. For all but three of the 20 Boolean functions checked, we found the minimum solution (for three functions this meant an improvement of one input bit over the heuristic method). For the other three Boolean functions we never found the best results; we found one input bit more than was needed (in one case the best solution ever found was produced using the merge operator as a heuristic algorithm). In two of these cases the solution found still contained one bit less than the solution found by the already developed heuristic method. For just one instance the heuristic method outperformed the genetic algorithm (needing one bit less). The repair operator has not been applied in the final algorithm, except for elements with very promising solutions that were just a few input bits short.
The merge operator, on the other hand, has proven to be a very powerful operator. As a matter of fact, applying this operator just once to the whole problem, without the genetic algorithm, generated better results for four of the problems where the heuristic method did not find the optimum (but for four other problems it needed one bit too many). The original genetic algorithm has been developed for test purposes; speed was not of main concern. Now that the algorithm has shown its good performance, it would be useful to make it faster. As said before, the algorithm can easily be applied to new problems and therefore the benefits might also be found in future projects. There are still some other improvements possible. The probabilities to use an operator are fixed now. It would be useful to test changing the probabilities over time. In the ideal situation, the probabilities should be adapted depending on some population characteristics (diversity, average value of the solutions). A real stop condition has to be added too, as the current tests were just runs over 60 generations. For the minimal input support problem, it would be useful to extend the current test set, so the performance of the possible algorithms can be researched better. The parameters are specifically set for the current test set, and the heuristic method almost always finds an optimal solution. There is also a possibility to use the genetic algorithm quite differently in order to make the heuristic method perform as well as possible.
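The mechanics described above - valued elements, selection, copy/mutation/crossover, generations - can be condensed into a small sketch. Everything below (the 3-input truth table, the fitness penalty, the parameter values) is an invented toy instance, not the report's algorithm or test set. An input subset (mask) is feasible when the selected bits still determine the function's output:

```python
import random

# Toy function of three inputs (x2, x1, x0); it actually equals x2 XOR x0,
# so the minimal input support is two bits: {x2, x0}.
TRUTH = {
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 0, (0, 1, 1): 1,
    (1, 0, 0): 1, (1, 0, 1): 0, (1, 1, 0): 1, (1, 1, 1): 0,
}
N = 3

def feasible(mask):
    """True if the inputs selected by `mask` still determine the output."""
    seen = {}
    for vec, out in TRUTH.items():
        key = tuple(v for v, m in zip(vec, mask) if m)
        if seen.setdefault(key, out) != out:
            return False
    return True

def fitness(mask):
    # feasible masks score higher the fewer inputs they keep;
    # infeasible masks rank below every feasible one
    return (N - sum(mask)) if feasible(mask) else -sum(mask) - 1

def ga(pop_size=30, generations=50, p_mut=0.3, seed=1):
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rnd.sample(parents, 2)
            cut = rnd.randrange(1, N)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rnd.random() < p_mut:           # mutation
                child[rnd.randrange(N)] ^= 1
            children.append(child)
        pop = children
        best = max(pop + [best], key=fitness)  # keep the best element ever seen
    return best

best = ga()
```

The repair and merge operators of the report would hook into the child-creation loop; here only the three basic operators are shown.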
Candidate: J.D. van Felius (Report no. EB 621)
Graduation date: 29 August 1996
Graduation project: From POOSL to C++
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: ing. P.H.A. van der Putten, ir. J.P.M. Voeten
Summary
To analyse, specify and design information-technology products, good methods for hardware/software co-design are needed. At the Information and Communication Systems group of the Eindhoven University of Technology, ing. P.H.A. van der Putten and ir. J.P.M. Voeten have developed an object-oriented design method called SHE (Software/Hardware Engineering). SHE incorporates the formal description language POOSL (Parallel Object-Oriented Specification Language), which is used to describe systems in a formal, unambiguous way. The graduation report describes a method to implement POOSL descriptions in the object-oriented programming language C++ using an HP UX 9.05 system. The POSIX 1003.1c thread library is used to support concurrency. The overall subject of the report is to investigate what problems occur when a POOSL description is implemented in C++ and to develop a method to deal with these problems. The method consists of a library and a set of templates and rules to facilitate or automate future implementations of POOSL specifications in C++. Within this scope, the report especially focuses on the implementation of the communication principle of POOSL in combination with the 'OR' operator. The constructed library, which is called the PEC (POOSL Extension of C++) library, makes it possible to implement the communication principles of POOSL, including conditional reception of messages. The developed method also describes a way to implement the choice statement with regard to communication requests. The PEC library does not yet offer the functionality to implement interrupt statements, abort statements or guarded command statements. The concept of tail recursion is not supported either. Further, it is not possible to implement a choice statement where a choice has to be made between communication requests and non-communication requests. The implementation of a POOSL data class in C++ is straightforward.
There are no elements in the syntax of POOSL data classes that require additional functionality in C++, for example offered by an additional library such as the PEC library. An example is worked out to test the method and the PEC library. The result is a working implementation of the PAR (Positive Acknowledgement with Retransmission) protocol. The average number of rendez-vous actions performed during one second of execution time is 606 (with a standard deviation of 63).
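The communication principle the PEC library implements is POOSL's synchronous message passing: a send and a matching receive complete together, as a rendez-vous, so the two processes synchronise on the communication. A minimal sketch of that idea with ordinary threads (illustrative Python; it assumes nothing about the actual PEC API):

```python
import queue
import threading

class Channel:
    """Synchronous (rendez-vous) channel: send() returns only after a
    matching receive() has taken the message."""
    def __init__(self):
        self._item = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def send(self, msg):
        self._item.put(msg)
        self._ack.get()          # block until the receiver has taken the message

    def receive(self):
        msg = self._item.get()
        self._ack.put(None)      # release the blocked sender
        return msg

ch = Channel()
received = []

def receiver():
    received.append(ch.receive())

t = threading.Thread(target=receiver)
t.start()
ch.send("hello")                 # completes only at the rendez-vous
t.join()
```

Conditional reception and the choice ('OR') statement add the possibility to wait on several such channels at once, which is exactly the part the report identifies as hard to map onto threads.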
Candidate: M.C.W. Geilen (Report no. EB 623)
Graduation date: 29 August 1996
Graduation project: Real-time concepts for Software/Hardware Engineering
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: ing. P.H.A. van der Putten, ir. J.P.M. Voeten
Summary
The SHE method is a method for the specification and design of distributed communicating hardware/software systems. Part of this method is the formal specification language POOSL. POOSL is an object-oriented language for the specification of parallel communicating processes. In its current version, POOSL is unable to describe timing behaviour. Since many of the systems that need to be designed are real-time systems, POOSL needs to be extended with timing primitives. Such an extension is studied in the graduation report. Existing real-time algebras and real-time programming and specification languages are studied. Important aspects of real-time specification are discussed and concepts are chosen as a basis for the extension of POOSL. The language POOSL is extended with time. The meaning of existing POOSL primitives in relation to time is investigated and a new primitive is added that can specify quantified timing behaviour: delay d. Further, the communication between processes is extended with timing to model the occupation of a channel during communication. After this, all necessary changes to the POOSL language are listed completely. The formal semantics of POOSL allow the use of tools such as verification and behaviour-preserving transformations. These tools are based on equivalence relations between POOSL specifications. These relations are defined for the new language. Furthermore, the relationship between timed and untimed specifications is investigated. It is possible to define an abstraction function that computes, from a timed POOSL specification, its untimed equivalent. Finally, the expressive power of the new language is studied. A number of typical aspects of real-time systems are investigated, such as modelling computation time, modelling communication time, time-outs, etcetera.
Candidate: ing. G.T.C.J. Hansink (Report no. EB 629)
Graduation date: 29 August 1996
Graduation project: A converter from IDaSS design files to synthesizable VHDL
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: dr.ir. A.C. Verschueren
Summary
Nowadays, CAD plays an important role in designing integrated circuits. A CAD tool developed in the section ICS (Information and Communication Systems) is IDaSS (Interactive Design and Simulation System). With this system a digital design can be made and simulated. For implementation of the design, which results in a chip, the design has to be "mapped" onto a chip layout. There are commercial converters on the market which use VHDL (Very High Speed IC Hardware Description Language) as input; this is a language which describes the chip (hardware). Within IDaSS a design can be saved to a so-called DES file (design file); this file has to be converted to VHDL before it can be used for silicon compilation (generating the chip layout). When ing. Hansink started the graduation project, not all the desired functionality was present. He was the fourth person to work on the converter, and the adaptations to the program continued. The converter has been written in C and contains several functional modules. The missing functionalities were:
• correct generation of datapaths from and to Finite State Machines (FSMs);
• making it possible to control a system with more than one controller (FSM or control connector);
• adding semaphore behaviour to a register;
• supporting multiple schematics;
• correct conversion of the IDaSS blocks: FIFO, LIFO, ROM, RAM and CAM;
• supporting signals;
• optimizing parameter nets.
The last three points are still not supported, but the other functionalities are supported now. The program becomes larger and larger, and it is getting more important to keep it structured and well documented. After expanding the functionality it is very important to test the changes to the converter, because getting the bugs out is very time-consuming, especially when a bug is present in code written by others.
Candidate: E.A.M. Kuipers (Report no. EB 616)
Graduation date: 27 June 1996
Graduation project: Design of a RSA crypto-processor using a systolic array
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: ir. E.G.H.M. Bormans (Pijnenburg), R. Joosten (Pijnenburg)
Summary
The graduation report describes the design of a scalable RSA device, which is suited for public-key encryption and decryption according to the Rivest, Shamir and Adleman method [Riv77]. This design has been developed in the context of a graduation assignment at the section Information and Communication Systems, Faculty of Electrical Engineering, Eindhoven University of Technology. The assignment is characterized as follows: design a parameterizable RSA crypto-processor which can be optimized for either chip size or encryption speed. The goal is to achieve maximum flexibility, which allows the processor to be used in any environment using an optimal configuration. The RSA design is based on a modular multiplication core, which is a systolic array consisting of a number of processing elements (PEs) that can be varied in number and size. The number and size of the PEs are parameters which can be used to configure the RSA design to perform optimally in its environment. For this purpose an existing multiplication algorithm has been adapted for systolic arrays, which results in a hardware PE design. The graduation report describes the steps required to adapt the existing algorithm into an efficient algorithm suited for systolic arrays. All conditions required to prevent overflow or underflow are described. Further, a schematic of the systolic array is presented, which shows the data flow in the PEs. Finally, a schematic of an RSA processor is presented, which is based on the multiplication core. The modular multiplication core has been simulated and functionally tested, from which it can be concluded that the adapted algorithm works correctly. The PEs of the systolic array have been described in VHDL and compiled to a hardware design. These compilations show that either high-speed or low-area RSA cryption can be achieved using different configurations of the flexible RSA design.
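RSA's core operation is modular exponentiation built from modular multiplications; a bit-serial, interleaved formulation keeps the intermediate result bounded after every step, which is what makes fixed-width systolic PEs feasible. The sketch below is a generic illustration of that idea, not the report's adapted algorithm, and the toy RSA parameters are invented:

```python
def modmul(a, b, m, width):
    """(a * b) mod m, scanning the bits of `a` from MSB to LSB; the running
    result r stays below m after each step, so its bit width is bounded."""
    r = 0
    for i in reversed(range(width)):
        r = 2 * r + ((a >> i) & 1) * b
        r %= m   # r < 3m before reduction, so hardware needs at most two subtractions
    return r

def modexp(base, exp, m, width):
    """Square-and-multiply modular exponentiation built on modmul,
    the operation an RSA processor iterates."""
    r = 1
    for i in reversed(range(width)):
        r = modmul(r, r, m, width)
        if (exp >> i) & 1:
            r = modmul(r, base, m, width)
    return r

# toy RSA: n = 5 * 11 = 55, public e = 3, private d = 27 (3 * 27 = 81 = 1 mod 40)
cipher = modexp(8, 3, 55, 8)      # encrypt message 8
plain = modexp(cipher, 27, 55, 8) # decrypt: recovers 8
```

In the systolic array each PE handles a slice of `r` per clock, so the `width` parameter directly corresponds to the number-times-size product of the PEs.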
Candidate: J. Lecluse (Report no. EB 607)
Graduation date: 25 April 1996
Graduation project: Design of a Smartphone with a Digital Signal Processor
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: drs. H. Braams (Pijnenburg Electronic Products), K.J. Deist (Pijnenburg Electronic Products)
Summary
The company Pijnenburg Electronic Products is interested to know whether the application of a Digital Signal Processor (DSP) in a telephone called Avenue is meaningful and possible. The DSP should be able to handle answering-machine functions, modem functions for the consultation of a telephone directory, and a full-duplex handsfree speakerphone function. The Avenue is an advanced screenphone with many special functions, like a telephone index, text telephony, cost counting and handsfree calling. Because the telephone works with low-frequency components, a signal processor should be capable of handling these functions. First, the complexity of the different functions was examined. These functions are DTMF decoding, voice compression and decompression, and echo cancellation. Next, general-purpose DSPs were searched for, as well as their specifications and their prices. During this search, another kind of DSP, called a Digital Tapeless Answering Device (DTAD), was discovered. This device contains a DSP and a microcontroller with already implemented digital signal processing functions. A number of companies are developing DTADs, and they are becoming more powerful, with many special functions like voice recognition. In order to obtain more information about DSPs, a company called CME was visited. This company, which is located in Veenendaal, helps other companies obtain information about digital signal processing. The third possibility, i.e. the creation of an ASIC, was introduced there. With the aid of their computers and software one is able to make one's own custom digital signal processor. The question where to get the software still remained, however. Creating our own software would take a long development time. Therefore, companies which offer "off-the-shelf" processing algorithms for our application were searched for. The companies that have been found offer some good solutions, but are expensive.
The conclusion at this moment, considering the prices for general-purpose DSPs and the third-party software, is that one could best use a DTAD from the DSP Group, or the MSP58C85 from Texas Instruments. They are relatively cheap and there are possibilities for the implementation of Pijnenburg's CAS algorithm within the DTAD.
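Of the functions whose complexity was examined, DTMF decoding is the classic low-cost DSP task: the Goertzel algorithm measures signal power at the eight DTMF tone frequencies far more cheaply than a full FFT. The sketch below is a generic illustration with typical parameters (8 kHz sampling, 205-sample blocks), not figures from the report:

```python
import math

def goertzel_power(samples, freq, fs):
    """Squared magnitude at `freq` via the Goertzel recurrence."""
    coeff = 2 * math.cos(2 * math.pi * freq / fs)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

fs, n = 8000, 205   # a 205-sample block at 8 kHz is a common DTMF choice
t = [i / fs for i in range(n)]
# key '4' is the 770 Hz row tone plus the 1209 Hz column tone
tone = [math.sin(2 * math.pi * 770 * x) + math.sin(2 * math.pi * 1209 * x) for x in t]

powers = {f: goertzel_power(tone, f, fs)
          for f in (697, 770, 852, 941, 1209, 1336, 1477, 1633)}
row = max((697, 770, 852, 941), key=powers.get)   # strongest row frequency
col = max((1209, 1336, 1477, 1633), key=powers.get)  # strongest column frequency
```

One multiply-accumulate per sample per frequency is why even a modest DSP or DTAD handles DTMF decoding alongside the other functions.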
Candidate: W.C. van Leeuwen (Report no. EB 620)
Graduation date: 29 August 1996
Graduation project: Considerations and a proposition of the Analog Display Services Interface with a Smartcard Interface
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: drs. H. Braams (Pijnenburg)
Summary
In the near future smartcards will play an important role in the financial transaction business. The protocols to communicate with these smartcards are currently being defined and will stabilize in the coming months. Different applications for smartcards will be developed. For example, the integration of a smartcard reader into a consumer telephone will open interesting opportunities. Functions like teleshopping and payment for information services become possible. The currently defined smartcard standards, however, assume that the financial application and the smartcard reader are integrated in one single device. For the integration of a consumer telephone and a smartcard reader this is not suitable. The graduation report defines a telecommunication interface within the currently defined smartcard communication protocols. The security requirements for financial smartcard transactions are taken into account. For the transmission of the defined interface commands over a telephone line, the Analog Display Services Interface (ADSI) will be used. The ADSI protocol defines a data communication protocol between a remote server and a consumer telephone. This protocol therefore seems suitable to transmit the transaction data between the financial server and the smartcard reader in the telephone. The implementation of the smartcard interface within ADSI requires some extensions of the ADSI protocol. These extensions are defined in the report. The physical layer of the ADSI protocol needs to be extended with a 1200 bit/s return channel, the datalink layer needs additional error detection and correction capabilities, and some dedicated messages are added to the message layer command set.
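Datalink-layer error detection typically means a frame check sequence such as a CRC appended to each frame. The bit-serial CRC-16/CCITT routine below is a generic illustration of such a check; the polynomial and parameters are a common choice, not necessarily the ones the report's ADSI extension specifies:

```python
def crc16(data, poly=0x1021, crc=0xFFFF):
    """Bit-serial CRC-16/CCITT (MSB first, init 0xFFFF, no reflection)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # shift out the top bit; XOR in the polynomial when it was set
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"transaction data"
fcs = crc16(frame)   # sender appends fcs; receiver recomputes and compares
```

The receiver recomputes the CRC over the received frame and requests retransmission on a mismatch, which is the "detection and correction" capability a return channel makes possible.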
Candidate: P.A.C.J. van Loon (Report no. EB 614)
Graduation date: 27 June 1996
Graduation project: Time Synchronising in Digital Audio Broadcasting Receivers
Supervising professor: prof.ir. M.P.J. Stevens
Supervision: ing. F.A.M. van de Laar (Philips CE), ir. A. Jongepier (Philips CE)
Summary
In the current information era, all kinds of information are communicated. One method of communication is radio broadcast. The most used technical means until now have been AM and FM reception. Digital Audio Broadcasting or DAB is a digital radio standard for transmitting CD-quality stereo audio together with all kinds of extra information. It could be called the multimedia radio, for video and computer data can also be transmitted via DAB. Furthermore, DAB is intended for mobile error-free reception that overcomes well-known reception problems like multi-path, fading and Doppler effects. The theory behind the Digital Audio Broadcasting system is investigated to provide a background for understanding time synchronisation in DAB receivers. A high-level system modelling and simulation tool, DSP Station, is used to describe a DAB system of encoder, channel and decoder. This DAB system description is set up in order to study time synchronisation in DAB receivers. DSP Station is evaluated for its use as a high-level system modelling and simulation tool, based upon the experiences of the author and several other DSP Station users. Several aspects of DSP Station need improvement in order to get a full-fledged, time-saving, design-supporting CAD/CAE tool. Several methods for time synchronisation are looked at, and the author has proposed a promising new time synchronisation algorithm for implementation in a new DAB receiver chip-set. This proposal has made it into the DAB V3 receiver chip-set that will be available at the end of this year. Field tests can then show the improved performance due to this algorithm, relative to previously used algorithms.
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
I. Mankoe 12 december 1996 Designing a simulator for POOSL prof.ir. M.P.J. Stevens ing. P.H.A. van der Putten ir. J.P.M. Voeten
Rapport nr. EB 633
Summary Within the Information and Communication Systems Group at the Eindhoven University of Technology, active research is performed in the field of design methods and tools that support the development of complex systems. This research resulted in the object-oriented design methodology SHE (Software/Hardware Engineering). As a part of SHE the formal specification language POOSL (Parallel Object-Oriented Specification Language) has been designed. Starting from informal graphical modelling the SHE method produces rigorous specifications described in the POOSL language. Once a POOSL specification of a system is made, it can be used to automate subsequent design steps. To do so, one has to be sure that the specification is correct. For this reason the need for a tool arises which can be used to check the correctness of a specification: a POOSL simulator. In the graduation thesis the first steps towards the design and implementation of a POOSL simulator are described. The mapping of POOSL onto the programming language Smalltalk and the conversion of the POOSL syntax into Smalltalk equivalents are the main subjects of this thesis. First, an inventory is made of the differences between the specification language POOSL and the programming language Smalltalk. This is followed by the mapping problems that result from these differences. Solutions for these problems are given, resulting in conceptual solutions for the POOSL simulator. Special attention is paid to the design and implementation of a mechanism in Smalltalk, which is used to simulate the communication between POOSL process objects. The conversion of the POOSL syntax into its Smalltalk equivalent is described in detail. Conversion rules for almost all process statements are given. Finally, the implementation and conversion rules are used to simulate a test design in order to demonstrate the resulting implementation.
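The thesis maps this communication mechanism onto Smalltalk; as a rough illustration only, the idea of matching a send against a pending receive on a channel can be sketched in Python. All names below are invented for this sketch, and real POOSL rendezvous semantics are richer than this pair-matching model:

```python
# Illustrative sketch (in Python, not Smalltalk) of simulating message
# passing between process objects: a send only completes when a matching
# receive is pending on the same channel.
class Channel:
    def __init__(self):
        self.pending_recv = {}   # message name -> waiting callback

    def receive(self, name, callback):
        # Register interest in a message; the callback models the
        # receiving process continuing its computation.
        self.pending_recv[name] = callback

    def send(self, name, *args):
        cb = self.pending_recv.pop(name, None)
        if cb is None:
            return False         # no partner yet: the sender stays blocked
        cb(*args)                # rendezvous: both sides proceed
        return True

log = []
ch = Channel()
assert not ch.send("data", 42)             # receiver not ready: blocked
ch.receive("data", lambda v: log.append(v))
assert ch.send("data", 42)                 # now the rendezvous completes
print(log)  # [42]
```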
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
C.J.A. Minkenberg Rapport nr. EB 632 17 oktober 1996 Performance Simulations of the PrizmaPlus Switch prof.ir. M.P.J. Stevens dr.ir. M.C.A.A. Heddes (IBM Zürich Research Laboratory)
Summary The master's thesis work presented in the graduation report was carried out at the IBM Research Laboratory in the High Speed Networking Group, Communications Systems Division. Among other projects, this group is currently working on a high-speed VLSI packet switching element, called PrizmaPlus. This switch may be employed to build switching nodes in future broadband telecommunications networks based on the Asynchronous Transfer Mode (ATM) technology. However, the switching element can also be used for other purposes. The switch incorporates several novel architectural features which make it very well suited for very high data rates. Very important measures of performance for a packet switch in general are the maximum throughput, the average packet delay and the packet delay variation (jitter). In order to be able to evaluate the effects of architectural changes on these performance criteria, a performance simulation tool has been developed which simulates switch behaviour at the packet level (modelling at a very high level). This report deals with extensions and modifications to this performance simulation model and with corresponding performance simulations to verify their effects. Main points of attention were:
• Traffic models, mainly IP-like bursty traffic and Constant Bit Rate traffic.
• Backpressure priorities: A nested backpressure priority has been considered which allows different backpressure thresholds to be set for packets of different priorities.
• Switch priorities: An output queuing scheme employing two output queues (each one associated with one so-called "switch priority") instead of just one, to allow high priority traffic to overtake low priority traffic, has been evaluated as well.
• Single stage 32x32, single stage 128x128 and three stage 128x128 switching networks have been examined, each built up out of the basic PrizmaPlus switch.
The results of all these simulations have been processed, rendered graphically, and interpreted in the graduation report.
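The performance criteria named above (throughput, average delay) can be illustrated with a toy slotted-time model of an output-queued switch. This is not the PrizmaPlus simulator; the port count, offered load and uniform traffic model are arbitrary choices for the sketch:

```python
import random

# Toy slotted-time model of an output-queued packet switch. Each slot,
# every input offers a packet with probability `load` to a uniformly
# chosen output; every output sends at most one queued packet per slot.
def simulate(n_ports=8, load=0.8, slots=20000, seed=1):
    rng = random.Random(seed)
    queues = [[] for _ in range(n_ports)]   # one FIFO per output port
    delays, sent = [], 0
    for t in range(slots):
        for _ in range(n_ports):            # arrivals this slot
            if rng.random() < load:
                queues[rng.randrange(n_ports)].append(t)
        for q in queues:                    # departures: 1 packet/port/slot
            if q:
                delays.append(t - q.pop(0)) # waiting time in slots
                sent += 1
    throughput = sent / (slots * n_ports)   # fraction of slot capacity used
    return throughput, sum(delays) / len(delays)

tp, d = simulate()
print(f"throughput {tp:.2f}, mean delay {d:.1f} slots")
```

At loads below saturation the measured throughput approaches the offered load, while the mean delay grows with load; architectural variants (backpressure, priority queues) would be compared on exactly these figures.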
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Afstudeerhoogleraar: Begeleiding:
S. Slegers Rapport nr. EB 637 12 december 1996 The design and implementation of an external interface for a CAN/CAL controller prof.ir. M.P.J. Stevens dr.ir. A.C. Verschueren
Summary The master's thesis describes the graduation work carried out at the Information and Communication Systems (ICS) group of the Eindhoven University of Technology. ICS is developing a CAN/CAL controller. This controller contains a bit- and byte-processor for handling the CAN and CAL protocols. One side of the CAN/CAL controller is connected to the CAN bus, and the other side is connected to the outside world. My task was to build an interface between the CAN/CAL controller and the outside world. This interface had to provide the following capabilities: memory extension, a master CPU interface, a test interface, and extra I/O ports. The primary goal of designing this interface was to minimize the number of pins and maximize its flexibility. This means that the user must be able to configure this interface. The test interface must make it possible to test the CAN/CAL controller by means of reading and writing internal registers and memories. The interface as designed and implemented in IDaSS offers the features described above. It offers 6 programmable 8-bit I/O ports. These ports become memory interface or CPU interface ports when configured by the user. In addition, the interface has SIO pins and an external interrupt pin.
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
G.T.L.M. Verbeeten Rapport nr. EB 617 27 juni 1996 A Telephony Metaphor - Designing a metaphor based GUI for CTI-applications prof.ir. M.P.J. Stevens ir. A. Suurmond (KPN Research) ir. B. Bos (KPN Research)
Summary Telecommunication environments built up from ISDN and/or PBXs are able to offer more advanced telephony services like transfer and conference calls. This telecommunication infrastructure is typical for an office environment. People could use these telephony services to manage their telephone communications in a more efficient and effective way, but they hardly make use of the advanced telephony services. The main problem is that it is difficult to invoke the services: people make mistakes or cannot remember the code sequence needed to invoke the desired service. The quality of a user interface is determined by the technical solution, the utility and the usability. Utility means that the functionality of the application is tailored to the requirements of the user. In this report the utility is of less concern, due to the fact that the telephony features are defined mainly by the telecommunication infrastructure. The main issue in this report is to find ways to improve the usability of the complex telephony services. Design guidelines are selected and used to develop a graphical user interface (GUI) that is user-friendly. Besides the description of the realisation of the user interface there is also a description of the developed demonstrator. One of the present developments in the telecommunication environment is the integration of computers and telephony. The term Computer Telephony Integration (CTI) refers to the integration of services provided either by telecommunication systems or by the computer. Basically, the integration is based on two CTI architectures: the Standalone and the Client/Server architecture. The Standalone architecture is based on a physical link between telephone and computer. The Client/Server architecture uses a virtual link to convey information from the telephony network to the computer network. The architecture determines to what extent the computer has call control capabilities.
Dependent on the CTI architecture, the communication between the two environments is based on either a first party or a third party call control model. In first party call control the initiator of the call is also necessarily a party to the call. In third party call control a third party, a human being or a software agent, can establish connections on behalf of two or more other human beings or software agents. The Standalone architecture offers only first party call control capabilities, and the Client/Server architecture uses the third party model. The first party call control model is sufficient to implement the advanced telephony services. The demonstrator is based on the Standalone architecture and uses the first party call control model to implement the telephony services. Nowadays, CTI applications provide easy access to directory information to retrieve the telephone number and offer a user interface with quick-access buttons which are used to activate complex telephone operations. Although people no longer have to remember the code sequences, the present screen-based user interfaces are still hard to use due to the lack of visual information: information about how to invoke the desired telephony service, about which services are possible, and about which are available at the moment. Visual information includes emotional cues that motivate, direct or distract people. In the area of Human Computer Interaction (HCI) there are some theories about how people interact with the computer. One of the approaches is the cognitive theory, which states that human beings have mental models which guide their actions and behaviour during interaction with the computer.
To help users develop an accurate mental model of how to invoke the telephony services, the designed GUI makes use of a metaphor and makes active interaction possible. By using real-world objects to represent the working of the telephony system, people use experiences from previous learning to solve their problem and therefore learn more quickly and with less effort. The unique quality of the designed GUI, compared to other existing CTI applications, is that the designed user interface uses a metaphor to assist people in invoking the desired telephony service. The metaphor is basically a room where people can meet and talk to each other without any device. A refinement of this approach is the office room. The office metaphor offers a powerful display representation that gives the user adequate visual feedback about which services are available. This is not only (visual) information about the present state, but also information about which telephony services can still be activated. The designed graphical user interface functions intuitively: it works the way it looks and it looks the way it works. A demonstrator has been developed in the Service Creation Laboratory of KPN Research with the development tool Visual Basic. The demonstrator uses the office metaphor to support the following telephony services: call set-up & disconnection, calling line identification presentation, transfer calls, three party calling, putting a call on hold and forwarding calls. The communication between the CTI application and the PBX is based on the Terminal Program Interface (TPI), a Philips-specific protocol. User-interface design is not a rigorous process; it is not a science. During the design process the designer is confronted with the subtlety of user-interface issues, like simplicity, learnability and user-friendliness. Often there is no scientific evidence to support one alternative over another; it is just intuition.
To make a final judgement about the quality of the designed user interface, an independent usability test is necessary.
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
A.M.J. van Vught Rapport nr: EB 605 15 februari 1996 A.T.M. in a Distributed Computing Environment prof.ir. M.P.J. Stevens ir. A.G.M. Geurts
Summary The master's thesis presents a framework for using a network of workstations, connected by an ATM network, for a mixture of parallel and sequential jobs. These workstations use standard operating system software, are equipped with off-the-shelf network interfaces, do not allow direct protected user-level access to the network, and use networks without reliable transmission or flow control. Asynchronous Transfer Mode (ATM) is an international telecommunications standard designed for broadband integrated services; it is also well-suited for use within local-area networks. ATM LANs can provide the networking support needed for communications at rates of 155 Mbits/sec and higher. The thesis first analyzes the protocol processing required to handle ATM communication. Based on this analysis, the architectural issues in the design of host interfaces for ATM local-area networks are discussed. Analytical and experimental evaluations show that ATM adapters can perform quite close to their designed limits, provided that they are used in a properly configured environment with resources capable of sustaining the desired throughputs. While the media speed may be 155 Mbits/sec, there are a number of factors that determine the final maximum throughput observed by a user of an ATM adapter. Some are the overhead inherent to ATM, such as the 5-byte ATM header that accompanies every 48 bytes of data sent. Others are inherent in the protocols used in the communication layers above ATM. Still others depend upon the processor speed and the operating system used by the adapter host system. In particular, it can be concluded that a simple host interface, which leaves most of the ATM protocol processing to be done by the host computer, supports performance for data communication around 100 Mbits/sec. However, to support higher bandwidth communication, the ATM interface should include an embedded processor.
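The cell overhead mentioned above puts a hard ceiling on user throughput. A one-line calculation shows the bound, assuming only the 5-byte header per 53-byte cell and ignoring all higher-layer overhead:

```python
# ATM "cell tax": every 53-byte cell carries a 5-byte header and
# 48 bytes of payload, so payload efficiency is at most 48/53.
CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48 bytes

def max_payload_rate(line_rate_mbps):
    """Upper bound on user data rate; higher-layer overhead ignored."""
    return line_rate_mbps * PAYLOAD_SIZE / CELL_SIZE

rate = max_payload_rate(155.0)
print(f"max user payload at 155 Mbit/s line rate: {rate:.1f} Mbit/s")  # 140.4
```

Protocol headers above ATM (e.g. AAL framing, IP, TCP) and host processing costs reduce the achievable figure further, consistent with the roughly 100 Mbits/sec observed for a simple host interface.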
Active messaging is a communication model designed around the interaction of a network interface and its driving software in an operating system. By utilizing this model, the user can design applications that make better use of the available computing and communication resources. Currently, successful implementations exist only for a certain subset of workstations and network adaptors. Active Messages is a mechanism that allows efficient overlapping of communication with computation in multiprocessors. Communication using Active Messages is in the form of requests and matching replies. An Active Message contains the address of a handler that gets called upon receipt of the message, followed by up to four words of arguments. The function of the handler is to pull the message out of the network and integrate it into the ongoing computation. A request message handler may or may not send a reply message. However, in order to prevent live-lock, a reply message handler cannot send another reply. Shared memory provides programmers with a simple model for programming parallel computers by providing a single address space. In the shared memory model, processors communicate via reads and writes to the shared address space, facilitating the usage of pointer-based data structures required for many complex algorithms. Distributed Shared Memory (DSM) is a software abstraction providing a single shared address space to processors with disjoint memories. The idea of DSM over a network of workstations has been around for quite some time. However, due to poor performance, it has not been attractive as a practical solution. The emergence of high performance ATM networks may finally alleviate this problem and make DSM a practical reality.
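The Active Messages request/reply discipline described above can be sketched in a few lines. This is a hedged, single-process illustration; the class and function names are invented for the sketch and are not the real Active Messages API:

```python
# Single-process sketch of the Active Messages pattern: each message
# carries a handler plus up to four words of arguments; a request
# handler may send one reply; a reply handler may not reply again.
class Network:
    def __init__(self):
        self.queue = []  # messages "in flight"

    def send_request(self, handler, *args):
        assert len(args) <= 4          # AM limits arguments to four words
        self.queue.append(("request", handler, args))

    def send_reply(self, handler, *args):
        assert len(args) <= 4
        self.queue.append(("reply", handler, args))

    def poll(self):
        # Handlers pull messages out of the network and integrate them
        # into the ongoing computation.
        while self.queue:
            kind, handler, args = self.queue.pop(0)
            handler(self, kind, *args)

results = []

def add_reply_handler(net, kind, value):
    results.append(value)              # no further reply: prevents live-lock

def add_request_handler(net, kind, a, b):
    net.send_reply(add_reply_handler, a + b)   # the matching reply

net = Network()
net.send_request(add_request_handler, 2, 3)
net.poll()
print(results)  # [5]
```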
The thesis describes Quarks, a DSM-based parallel programming environment in UNIX, which is developed to provide a tool that can easily be used for building high performance computing applications. There is a large amount of literature on using idle cycles in a network of workstations for sequential load sharing, as well as on executing parallel programs on a dedicated network of workstations. Building upon earlier work, the thesis examined the feasibility of combining sequential and parallel jobs on a single platform. Traces at the University of California (Berkeley) showed that there were enough idle cycles present in their cluster to effectively support both sequential and parallel workloads, given a reasonable recruitment threshold and an efficient implementation of process migration. The success of the proposed system hinges upon maintaining the response time of the interactive sequential users of the cluster. Thus, parallel jobs were only run on otherwise inactive machines. However, because of secondary memory effects, moving a parallel process away from a workstation can potentially be quite costly to interactive users upon their return. A social contract can be used to minimize the number of interruptions to any one sequential user while still maintaining parallel throughput. At Cornell University (Ithaca, NY) a research project is run called U-Net: a user-level network interface for parallel and distributed computing. The U-Net communication architecture provides processes with a virtual view of a network interface to enable user-level access to high-speed communication devices. The architecture, implemented on standard workstations using off-the-shelf ATM communication hardware, removes the kernel from the communication path while still providing full protection. This allows a much tighter integration of computation and communication, with the effect that communication overheads are reduced dramatically.
The model presented by U-Net allows for the construction of protocols at user level whose performance is only limited by the capabilities of the network. The architecture is extremely flexible in the sense that traditional protocols like TCP and UDP, as well as novel abstractions like Active Messages can be implemented efficiently.
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
A. Wijffels Rapport nr. EB 610 25 april 1996 Design of an embedded microprocessor for array intensive tasks prof.ir. M.P.J. Stevens dr.ir. A.C. Verschueren ir. L.C. Benschop
Summary For his PhD thesis, ir. L.C. Benschop has defined a VLSI circuit that can compress and decompress data in hardware at a speed of 100 Megabit per second. The functionality of the circuit has been analysed and for all functions the optimum implementation technique has been selected. The resulting architecture consists of several parts, including an embedded microprocessor. The graduation report explains the design of this microprocessor. The processor to be designed must perform the intermediate-rate, high-complexity tasks of a lossless data compressor and decompressor. More specifically, it must generate the optimum Huffman code from statistics of the data. Further, it must encode and decode descriptions of these Huffman codes; these descriptions are included with the compressed data. Finally, it must program the code into the Huffman encoding and decoding hardware. These algorithms use simple operations (like addition) and no multiplication or division. They do, however, frequently access arrays in a random order. Therefore array indexing must be very efficient. Speed requirements are high for this embedded processor (the target is 50 MHz). The proposed processor has a Harvard architecture. This means that the program memory and the data memory are separate: instructions and data can be accessed at the same time. The program memory is a ROM and the data memory is a RAM. A small part of the data address space will be mapped to I/O devices. The step-by-step design path is shown according to the method of Patterson and Hennessy in their book "Computer Organization and Design: The Hardware/Software Interface". First the datapaths are defined that are necessary to execute the several addressing modes of the binary operations. A stack pointer and the datapaths for the (conditional) branch instructions are added, and finally a datapath is added to implement the interrupt feature. To increase the throughput of the microprocessor, pipelining is implemented.
As the pipeline became too long, the design specification was changed and the microprocessor was given a load-store architecture. To decrease data hazards, data forwarding has been added. The final design is modelled and simulated with IDaSS, an interactive design and simulation environment for synchronous digital circuits. To determine whether the timing requirements are met, the design can be translated to VHDL. With a silicon compiler the design can then be implemented in silicon and a more accurate timing analysis can be done.
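The processor's main task named above, generating an optimum Huffman code from symbol statistics, can be sketched at a high level. This is the generic textbook construction in Python, not the thesis implementation (which is constrained to additions and efficient array indexing):

```python
import heapq

# Build optimum Huffman code lengths from symbol frequencies by
# repeatedly merging the two least frequent subtrees; every merge
# deepens all symbols in the merged subtrees by one bit.
def huffman_code_lengths(freqs):
    """Return a code length per symbol index, given its frequency."""
    heap = [(f, [i]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    lengths = [0] * len(freqs)
    while len(heap) > 1:
        fa, syms_a = heapq.heappop(heap)
        fb, syms_b = heapq.heappop(heap)
        for s in syms_a + syms_b:
            lengths[s] += 1
        heapq.heappush(heap, (fa + fb, syms_a + syms_b))
    return lengths

print(huffman_code_lengths([45, 13, 12, 16, 9, 5]))  # [1, 3, 3, 3, 4, 4]
```

The resulting code lengths satisfy the Kraft equality (the lengths describe a full binary tree), which is what makes a compact description of the code, rather than the full codeword table, sufficient to transmit with the compressed data.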
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
J. Moerkerken Rapport nr. EB 609 29 augustus 1996 Distributed Resource Control for B-ISDN Release 2/3 prof.ir. J. de Stigter ir. J.A. Peek (Lucent Technologies voorheen AT&T NS NL Huizen)
Summary In the RACE project R2044, also known as the MAGIC project, an attempt was made to describe the signaling protocols for B-ISDN Release 2/3. In the applied specification methodology, a functional model for the network has been defined. This model consists of three functional layers - call control (CC), resource control (RC) and bearer control (BC) - which together are responsible for all actions necessary to set up, maintain and release a call. The modeling of resource control, as presented in MAGIC, is not finished. The main target of the thesis is to complete the modeling of resource control, using the results of the MAGIC project as a starting point. These results are used as a guideline in the modeling of resource control, and modified where they induced limitations upon the modeling. Resource control is responsible for allocating the resources necessary for a service request received from call control. In this thesis a framework is presented for resource control. Within the framework, algorithms perform the actual translation of a service request to physical resources. The quality of the translation is defined by three criteria: the translation time, the costs of the resources used and the costs of the connections made. For each of the three criteria a threshold value is introduced. The threshold values define which translations are acceptable, considering the criteria. The framework consists of the following elements: 1. For the translation of the service request to physical resources, RC entities (independent functional elements which can perform various functions on various controllable objects) require a certain knowledge of the resources, the user locations and the layout of the network. For the modeling of this knowledge a network model is necessary. The network (domain) presented in the thesis is a collection of sub-networks (local domains). The local domains consist of nodes and endnodes.
Local domains are connected to other local domains through interconnection trunks between the endnodes. The modeling of the knowledge of the RC entities can be solved using distributed platforms. In the thesis a model is presented as a reference model. In this model, each RC entity has knowledge of the local domain it belongs to, and the RC entities at the endnodes have the ability to obtain knowledge on other parts of the domain through a database system. The model of the network, as well as the model of the distributed RC knowledge, can be expanded for multi-level networks. 2. Interfaces have been enhanced for the communication between the functional entities in the network. The interface between call control and resource control is a black box description of the service, with the input/output relations of all parties involved. The format of the communication between peer RC entities depends on the knowledge of both RC entities. The interfaces have been updated to include the thresholds for the three criteria. Modifications of services are possible without interruption of the signal flow to users. 3. Depending on the knowledge of an RC entity and the format of the received service request, the action flow diagrams for three different functional RC entities can be described. One is for the RC entity receiving the service request from call control, one for the RC entities at endnodes and one for the other RC entities. The action flow diagram of an RC entity with a larger knowledge, receiving a service request with more freedom in the translation, encompasses the action flow diagram of an RC entity with less knowledge, receiving a service request with less freedom in the translation.
The framework and algorithms, performing the actual translation of a service request to resources, completely describe resource control. The strength of the framework is that the actual functionality of resource control is described by the algorithms, whereas the framework only supplies an environment in which the algorithms become operable. The definition of the optimal functionality of resource control is left open to the network operator, through implementation of the criteria in the algorithms. Better translation methods (algorithms) which may become available can be implemented in the developed framework. The model is ready to be implemented and tested. The implementation model should allow for modifications and enhancements to the algorithms and criteria. The implemented network model should allow for expansion of the model and freedom (for the network operator) in the implementation of the distributed knowledge of RC entities.
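The acceptance rule built around the three criteria and their thresholds can be written down compactly. The field names below are illustrative, not taken from the thesis:

```python
from dataclasses import dataclass

# Sketch of the acceptance rule: a translation of a service request is
# acceptable only if it stays within the threshold for each of the
# three criteria (translation time, resource cost, connection cost).
@dataclass
class Translation:
    time: float            # translation time
    resource_cost: float   # cost of the resources used
    connection_cost: float # cost of the connections made

@dataclass
class Thresholds:
    time: float
    resource_cost: float
    connection_cost: float

def acceptable(t: Translation, th: Thresholds) -> bool:
    return (t.time <= th.time
            and t.resource_cost <= th.resource_cost
            and t.connection_cost <= th.connection_cost)

th = Thresholds(time=1.0, resource_cost=10.0, connection_cost=5.0)
print(acceptable(Translation(0.5, 8.0, 4.0), th))   # True
print(acceptable(Translation(0.5, 12.0, 4.0), th))  # False
```

Leaving the thresholds (and the translation algorithms themselves) as parameters is what lets the network operator define the optimal functionality, as the summary notes.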
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
H. Nordkamp Rapport nr. EB 627 29 augustus 1996 DECT services - Inventarisation and creation of DECT mobility and value added services on an Intelligent Network platform prof.ir. J. de Stigter ir. P.M.A.M. Heijnen (KPN Research) ir. A. Lensink
Summary The graduation report is written within the framework of a graduation project. The goal of the assignment was to get a clear overview of developments in DECT. Furthermore, it was desired to extend the existing services in the SC-lab with services proposed for mobile users. The overview and these new services are described in the report. At the moment, DECT is able to offer coverage within one limited area. However, there are developments to take away the limitation of using the handset in only one area. An obvious solution is to connect DECT systems to a GSM network. The GSM network takes care of the mobility functions necessary to support roaming between the DECT systems. Another solution is to add mobility functionality to the fixed network. Work is progressing within ETSI on a standard that describes procedures to support mobility, called Cordless Terminal Mobility (CTM). CTM can fulfil the need for high quality mobile communication restricted to certain areas. Dual-mode DECT/GSM could profit from the advantages of both DECT, offering high quality, and GSM, offering large area coverage. A service is needed that presents these two access methods and networks as one to the user. These developments are described considering the question how roaming can be supported in and between the home, business and public environments. DECT can also be used as cordless access to ISDN and data networks; possibilities to do this are described briefly. Due to the network independence of DECT, it could be a solution to offer communication for tele-workers. The two services Virtual Private Network (VPN) and Group numbering are proposed as adding potential value for users of mobile terminals. Group numbering fills the gap that arises when the family's single phone is replaced by a handset for every member. VPN offers the service of abbreviated dialling for members within one family. The services and their implementation have been described in the graduation report.
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Afstudeerhoogleraar: Begeleiding:
M.H.J. Schoenmakers Rapport nr. EB 631 17 oktober 1996 Development of a generic model for handover in UMTS prof.ir. J. de Stigter dr. M.C. de Lignie (KPN Research)
Summary An important functionality of the third generation mobile telecommunications system UMTS (Universal Mobile Telecommunications System) is the handover functionality. To implement this complex functionality efficiently, a model is needed that can handle all possible handover functions (scenarios). The development of such a generic model was the goal that had to be achieved for the graduation project. From this main goal, four objectives were identified that had to be met:
• Analyse the common features of the different handover scenarios;
• Identify and describe the protocols between UMTS network nodes;
• Study and verify the feasibility of a generic model;
• Describe the coverage of the Handover Generic Model.
The analysis of the common features points out that the handover process in UMTS depends on several aspects. These aspects are the handover initiation points, handover initiation procedures, bearer types, the UMTS reference configurations and environments, the handover cases, the handover types, and, finally, the radio interfaces with which the network has been equipped. Each handover aspect identifies the options that are possible in the particular part of the handover process during which the aspect is relevant. Because of this, each handover scenario can be described by a combination of options of these handover aspects. The functionality that is required during a handover is described using the Handover Functional Model, developed by the MONET (MObile NETworks) project. This model identifies the phases during each handover process and structures the required functionality into Functional Entities. The handover aspects and their options are used to describe the allocation of the Functional Entities onto the network nodes, the Functional Groups, of an example UMTS environment. The possible signalling relations, the protocols, between the Functional Entities and hence between the Functional Groups are described. The example environment is used to describe the Information Flows that are needed between the Functional Entities. The Information Flows describe the interaction needed between Functional Entities to support their joint operation. A top-level structure of the handover Information Flows is developed that is suitable for all handover scenarios. The feasibility of a generic model for the handover functionality in UMTS is studied by formulating SDL (Specification and Description Language) specifications of the Functional Groups of the example environment. The feasibility of such a model is verified by performing simulations of the possible handovers, using the simulator in SDT (SDL Design Tool).
Finally, the coverage of the developed Handover Generic Model is described.
LEERSTOEL AUTOMATISCH SYSTEEM ONTWERPEN
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Afstudeerhoogleraar: Begeleiding:
E. van de Braak 28 augustus 1996 The evaluation of the Architecture Synthesis Tool Phideo. prof.dr.ing. J.A.G. Jess ir. J.A.A.M. v.d. Hurk ir. P.H. Frencken
Summary This report describes the evaluation of the architecture synthesis tool Phideo. Phideo is developed by Philips and is used to design high-throughput Digital Signal Processing applications. Typical properties of these applications are repetitions, dedicated hardware which is often pipelined, and large communication requirements. The synthesis and allocation of distributed memories play an important role in Phideo. The Phideo design method is based on the analysis of the manual design process and the different design decisions that are taken. Phideo is not a fully automated push-button system; important design decisions are left to the designer. The input of Phideo is written at a high level in the Phideo Input Language (PIF). Phideo generates a design and provides the necessary feedback to evaluate and improve the design iteratively, by defining constraints which drive the scheduling process and the memory synthesis. The final output of Phideo is a synthesizable Register Transfer Level VHDL description. As a test case for Phideo a part of an MPEG2 decoder is used. The research goal and conclusions concern design time, quality of the design and applicability in a product development environment.
140
Name of candidate: R. v.d. Knijff
Graduation date: 25 April 1996
Graduation project: Retiming of synchronous static CMOS circuits for Low Power
Graduation professor: prof.dr.ir. J.A.G. Jess
Supervision: dr.ir. J.F.M. Theeuwen
Summary
A method for the reduction of redundant switching activity (glitches) in synchronous static CMOS circuits is described. Redundant switching activity is analyzed with a logic simulator extended with a power estimation tool, by comparing two simulation runs: one with a realistic delay model and one with a zero-delay model. This information is used to calculate, for each circuit node, the amount of switched capacitance caused by redundant switches at that particular node. Flip-flops are inserted at nodes that cause a lot of capacitance to switch redundantly. Experiments have shown that the power consumed by the extra flip-flops nearly always exceeds the reduction in redundant power dissipation. A reduction can only be reached in highly active circuits dissipating a lot of redundant power. For these circuits, however, retiming with strong delay constraints seems to be more efficient.
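The per-node analysis described above can be sketched in a few lines, assuming toggle counts per node are already available from the two simulation runs. All names and numbers below are invented for illustration, not taken from the thesis tool:

```python
def redundant_switched_capacitance(real_toggles, zero_toggles, node_cap):
    """Capacitance switched by glitches: toggles beyond the zero-delay count."""
    result = {}
    for node, n_real in real_toggles.items():
        # Transitions in excess of the zero-delay (functional) count are redundant.
        excess = max(n_real - zero_toggles.get(node, 0), 0)
        result[node] = excess * node_cap[node]
    return result

# Toy data: node n1 glitches (12 real vs 4 functional toggles), n2 does not.
real = {"n1": 12, "n2": 4}
zero = {"n1": 4, "n2": 4}
cap_pf = {"n1": 0.05, "n2": 0.02}  # node capacitance in pF
per_node = redundant_switched_capacitance(real, zero, cap_pf)
```

Nodes with the largest entries in `per_node` would be the candidates for flip-flop insertion.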
141
Name of candidate: Nulens M.P.M.L.
Graduation date: 15 February 1996
Graduation project: Optimization of Data Path Bit-slice Placement
Graduation professor: prof.dr.ing. J.A.G. Jess
Supervision: dr.ir. R.X.T. Nijssen
Summary
Due to recent advances in VLSI and ASIC design technology, the size of chip designs is increasing, resulting in a larger chip area and a larger number of gates and nets per unit of chip area. Design specifications have become stricter, which makes it more difficult to generate layouts for these VLSI designs. This scale enlargement of the chips and the stricter design specifications make it necessary to develop new placement tools capable of chip-area minimization without violating timing and layout constraints. A commonly used layout method for VLSI designs is row-based standard-cell placement, in which the circuit's data path is divided into bit-slices that are placed in rows. In this thesis we describe the linear placement of data-path bit-slices subject to net-delay constraints and terminal constraints. The objective of the placement is to minimize the maximum net density while not violating any net-delay or terminal constraint. After the modeling of the placement problem, a brief overview of different placement methods is given. Then two placement methods are worked out, and finally the resulting linear placement tool is described. The first proposed placement method is a force-directed placement method. This method was not capable of correctly placing critical paths under net-delay constraints. A best-first search method was then used with success for the linear placement of bit-slices with net-delay constraints and terminal constraints. The resulting placement tool, called HOPPER, uses this best-first search method to find a feasible placement. Several modifications to the plain best-first search algorithm are used to improve the quality of the placement, reduce the run time, or limit the amount of memory used.
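A toy sketch of the best-first search idea: a state is a partial left-to-right placement, and the cheapest state is expanded first. The cost function, names, and netlist below are illustrative assumptions; HOPPER's actual cost also accounts for net-delay and terminal constraints:

```python
# Best-first search over partial linear placements, minimizing max net density.
import heapq

def max_density(order, nets, total):
    """Largest number of nets crossing any cut of a (partial) order."""
    pos = {s: i for i, s in enumerate(order)}
    # For a partial order, also consider the cut before the unplaced slices.
    cuts = range(1, len(order) + (0 if len(order) == total else 1))
    worst = 0
    for cut in cuts:
        crossing = sum(
            1 for net in nets
            if any(s in pos and pos[s] < cut for s in net)
            and any(s not in pos or pos[s] >= cut for s in net)
        )
        worst = max(worst, crossing)
    return worst

def best_first_place(slices, nets):
    """Expand the cheapest partial placement first; return a full order and cost."""
    frontier = [(0, ())]
    while frontier:
        cost, order = heapq.heappop(frontier)
        if len(order) == len(slices):
            return list(order), cost
        for s in slices:
            if s not in order:
                nxt = order + (s,)
                heapq.heappush(frontier, (max_density(nxt, nets, len(slices)), nxt))

# Three slices, two two-pin nets: any order keeping chained slices adjacent wins.
order, cost = best_first_place(["a", "b", "c"], [{"a", "b"}, {"b", "c"}])
```

The thesis's pruning modifications would bound this frontier to keep run time and memory in check.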
142
Name of candidate: H.A.J.M. Seelen
Graduation date: 12 December 1996
Graduation project: Automatic Synthesis of gated clocks for low power
Graduation professor: prof.dr.ing. J.A.G. Jess
Supervision: dr.ir. J.F.M. Theeuwen
Summary
A method is presented that automatically synthesizes gated clocks in synchronous static CMOS circuits to reduce power dissipation. The synthesis is performed on the gate-level description of the circuit. First, all flip-flops with a feedback loop are marked as candidates for a gated-clock implementation of the clock signal. When the feedback loop is active, the flip-flop is said to be in hold mode. Secondly, for each candidate the condition that activates its feedback loop is determined. Thirdly, candidates with coinciding conditions for activating their feedback loops are grouped into hold domains. Each hold domain is implemented with one gated-clock circuit. The gated clock signal may only be inactive if all flip-flops in the hold domain have an active feedback loop. This condition is also implemented. For its implementation the condition is simplified as much as possible by using local nets in the circuit. If the implementation of the condition using local nets costs too much area, however, an alternative solution is given by a straightforward implementation of the condition: exclusive-ORing the input and the output of the flip-flop. Problems concerning the verification and testability of the gated-clock circuits are also discussed, and solutions are presented. The automatic synthesis of gated clocks and the solutions for the testability of the gated-clock circuit are implemented in the tool Hold. This tool is tested on two designs: the 8-bit microcontroller 80c51 and the 16-bit digital signal processor rd16020. The implementation of the gated-clock circuit results in an area increase of less than 10%. Power measurements on the 80c51 show a reduction of 24% in power dissipation; on the DSP an average saving of 27% is realized.
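The fallback condition mentioned above (exclusive-ORing each flip-flop's input and output) can be illustrated behaviourally in a few lines. This is a toy, not the Hold tool's gate-level implementation:

```python
def clock_enable(d_inputs, q_outputs):
    """Enable the domain clock iff some flip-flop in the domain would change state.

    A flip-flop is in hold mode when its D input equals its Q output; the
    gated clock may be suppressed only when every flip-flop is in hold mode.
    """
    return any(d ^ q for d, q in zip(d_inputs, q_outputs))

# Every flip-flop already holds its value: the clock can be gated off.
print(clock_enable([1, 0, 1], [1, 0, 1]))  # False
# One flip-flop needs to update: the clock must run.
print(clock_enable([1, 0, 1], [1, 1, 1]))  # True
```

In hardware this reduces to an OR over per-flip-flop XOR gates, which is exactly why the straightforward implementation costs area and why the simplified local-net condition is preferred when possible.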
143
Name of candidate: Vaassen J.H.M.
Graduation date: 15 February 1996
Graduation project: Implementing CTL Model Checking within the BSN framework
Graduation professor: prof.dr.ing. J.A.G. Jess
Supervision: ir. G.L.J.M. Janssen, ir. C.A.J. van Eijk
Summary
The design of electronic circuits is still a growing business. Because the complexity of circuits grows, there is a need for better verification. There are several methods to verify a circuit's behaviour; one of these is symbolic model checking. Finite State Machines are used, for example, for controllers. As they grow bigger, the number of states grows very fast, so it is not possible to enumerate states and sets of states explicitly. To be able to check large FSMs, Binary Decision Diagrams (BDDs) are used to represent these sets of states: each set is represented by its characteristic function. In this way the state-explosion problem can be avoided. For model checking, a large number of states can be fatal, because the sets, or even the BDDs representing them, become too big. To avoid this, a reachability analysis is done first to reduce the number of states: from an initial state, all reachable states are calculated, and the set of reachable states is then used for model checking. Symbolic model checking is used to check the behaviour of Finite State Machines. It is a technique that uses a hardware language (here BSN, used by IBM) to describe the operation of a circuit and a logic (in this case CTL) to describe the desired properties. An implementation was already written by McMillan for his SMV system, but a similar program had to be implemented for the BSN language. The goal of this project was to build a model checker for BSN. This goal was not completely reached: the reachability algorithm was implemented, and all CTL formulas too, but a parser to process CTL input files has not yet been implemented. Some tests were done to check whether the algorithms worked; the results were good, and the algorithms functioned properly.
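The reachability analysis described above is a least-fixed-point image computation. In a real symbolic model checker the state sets are BDD-encoded characteristic functions; in this sketch ordinary Python sets stand in for them, and the tiny transition relation is invented:

```python
def reachable(initial, transitions):
    """Least fixed point: repeatedly add the image of the frontier until stable."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        # Image of the frontier under the transition relation.
        image = {t for s in frontier for t in transitions.get(s, ())}
        frontier = image - reached   # only genuinely new states
        reached |= frontier
    return reached

# Toy FSM: s3 and s4 exist in the description but are unreachable from s0.
fsm = {"s0": ["s1"], "s1": ["s2", "s0"], "s3": ["s4"]}
print(sorted(reachable({"s0"}, fsm)))  # ['s0', 's1', 's2']
```

With BDDs, the set operations above become Boolean operations on characteristic functions, which is what keeps the computation feasible for very large state spaces.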
144
VAKGROEP ELEKTRISCHE ENERGIETECHNIEK
145
LEERSTOEL ELEKTRISCHE ENERGIESYSTEMEN
147
Name of candidate: W.H. van de Akker
Report no.: EG-96-830
Graduation date: 17 October 1996
Graduation project: The effects of Contact Material and Contact Geometry on High Current Interruption in Vacuum
Graduation professor: prof.ir. G.C. Damstra
Supervision: dr.ir. W.F.H. Merck
Summary
Nowadays, vacuum circuit breakers are increasingly applied in medium-voltage networks. Two of the most important parameters for the interruption behaviour of these vacuum breakers are the contact material and the contact geometry. The goal of this graduation project was to investigate the effect of these two parameters on the interruption behaviour of a vacuum breaker, in order to obtain a classification of the investigated contacts. The criteria for the classification are: the number of re-ignitions, the re-ignition voltages or re-ignition field strengths, and the erosion of the contacts. The interruption behaviour of these contacts is investigated with a Weil-Dobke synthetic test circuit and a high-speed camera developed at KEMA. The CuCr butt-type contacts show excellent interruption behaviour for arc energies below about 1 kJ. For larger arc energies, the effect of the axial magnetic field generated by the coil-type contacts prevails, and the interruption behaviour of the butt-type contacts deteriorates rapidly in comparison with that of the coil-type contacts. The coil-type contacts were equipped with the contact materials Cu, CuCr, CuW and AgWC. Of these materials, AgWC is the most suitable for use in so-called low-surge breakers, and CuCr is the most suitable for interrupting short-circuit currents. The major drawback of Cu is the severe erosion of the anode contact, which makes it rather unsuitable as a contact material for vacuum breakers. CuW is a promising contact material for interrupting short-circuit currents, but it requires further research. It turns out that the current concentration and the heating of the anode contact have a large influence on the interruption behaviour of the contacts.
The development of a finite-element-method model that calculates the current concentration and the heating of the anode has therefore been started. The current-concentration model for butt-type contacts gives results that confirm the existence of current concentration.
148
Name of candidate: R.J.B. Gruntjes
Report no.: EG-96-838
Graduation date: 12 December 1996
Graduation project: Determination of the vacuum quality in vacuum circuit breakers
Graduation professor: prof.ir. G.C. Damstra
Supervision: dr.ir. W.F.H. Merck
Summary
Vacuum circuit breakers are increasingly applied in medium-voltage installations. Until now, the pressure in the vacuum interrupter has been checked almost exclusively at the end of the production process, using the Penning or the magnetron method. These methods, however, can no longer be applied once the vacuum interrupters have been built into a switchgear installation. Despite the guaranteed lifetime of 25 years, it is increasingly desirable to check after only a few years whether the pressure in the vacuum breaker is still below the threshold value (10^-4 mbar). A measurement method therefore has to be developed with which the vacuum quality of installed vacuum breakers can be assessed on the basis of only a few electrical parameters. In this study several methods have been investigated that may be suitable for pressure measurement "in the field". Some of these methods are already applied in practice. First, the Penning and magnetron methods were investigated, including their mutual differences. It appears that the Penning method has slightly more favourable properties. The method based on measuring the DC shield voltage turns out to be unsuitable for determining the pressure. Furthermore, the pressure dependence of the arc voltage of the DC arc was tested. Both with the average arc voltage and with the envelope of the histogram of the arc voltage, a distinction can be made between the pressure range in which the breaker will function properly and the pressure range in which the breaker loses its current-interrupting capability. The ratio between the maximum recovery voltage and the arcing time decreases when the pressure exceeds 10^-3 mbar. The type test used to determine the dielectric strength turns out to be unreliable for determining the pressure, because breakdown only occurs at 2x10^-2 mbar. Both of these methods are therefore unsuitable as a measure of the pressure.
Finally, the number of re-ignitions after interruption of a small (mA) resistive current does not yield significant differences in the investigated pressure range.
149
Name of candidate: W.H.A. Sluijpers
Report no.: EG-96-810
Graduation date: 25 April 1996
Graduation project: Sensitivity of protections; saturation of current transformers
Graduation professor: prof.ir. G.C. Damstra
Supervision: ir. J.G.J. Sloot
Summary
In relatively simple radial networks, overcurrent protections are used in which the magnitude of the current is continuously monitored. Such a protection issues a trip command when a preset value is exceeded for a preset time. In general, such a protection consists of a current transformer from which a relay is driven. The relay gives a trip command to a circuit breaker, which interrupts the current in the network. To protect people and equipment, the overcurrent should be cleared quickly. In medium-voltage networks of 10 kV with a short-circuit power of 350 MVA, a short-circuit current of several tens of kiloamperes can occur in the direct vicinity of a substation. Under normal operating conditions the network carries the so-called nominal current (IN); the relay should then not react. For larger short-circuit currents (>S.IN) most relays are provided with a so-called direct trip, which gives a trip command to the circuit breaker within 40 ms. In practice, however, the direct trip turns out to be almost never enabled. Generally only the delayed stage is used, usually set to a current of 1 to 1.4 times IN and a time of one second. The direct trip is mostly left unused; the reason given for this is spurious activation of the direct trip when energizing a string of distribution transformers.
Of the currents that occur (the so-called inrush currents), the first peaks can amount to 24xIN. An inrush current, however, has a large DC component. As a result, the current-transformer core will saturate, so that the relay receives a strongly distorted image of the current in the network. A number of measurements have been carried out to study whether a mechanical overcurrent time relay still responds under these conditions. For this purpose a current transformer was developed with an overcurrent factor as encountered in practice. The measurements show that, for this configuration, the relay does respond to the overcurrents occurring during short circuits, but does not respond to the generated inrush currents. In addition, a Matlab program has been developed that properly simulates the secondary current waveform of a loaded current transformer. From the measurements it follows that the direct trip can be used in the lowest-lying branch of a radial network for the used configuration of the overcurrent protection. When other types of current transformers and burdens are applied, one should have an idea of the distortion occurring with inrush currents. As a tool for this, a newly defined composite transformation error as a function of time has been proposed.
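The summary mentions a Matlab program that simulates the secondary current of a loaded current transformer. Below is a deliberately crude Python stand-in: the single-parameter flux-saturation model, the function names, and all values are assumptions for illustration, not the thesis model:

```python
# Toy saturating current-transformer model: the core flux integrates the
# burden voltage; once the flux limit is reached, the transformed secondary
# current collapses, which is what distorts the relay's view of inrush currents.
import math

def ct_secondary(primary, turns_ratio, burden_ohm, flux_sat, dt):
    """Return secondary current samples for a list of primary current samples."""
    flux, out = 0.0, []
    for i_p in primary:
        i_s_ideal = i_p / turns_ratio
        v_burden = i_s_ideal * burden_ohm   # voltage the core must support
        flux += v_burden * dt               # integrate the flux
        if abs(flux) > flux_sat:
            flux = math.copysign(flux_sat, flux)
            out.append(0.0)                 # saturated: no transformed current
        else:
            out.append(i_s_ideal)
    return out

# Without saturation the output follows the ideal ratio; with a low flux
# limit the output collapses after the first samples, as during inrush.
ideal = ct_secondary([100.0] * 3, 100, 1.0, 1e9, 1e-3)
saturated = ct_secondary([100.0] * 5, 100, 1.0, 0.0015, 1e-3)
```

A DC offset in `primary`, as in an inrush current, drives the flux to the limit much faster than a symmetric AC waveform, reproducing the effect described above qualitatively.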
150
Name of candidate: A.A.Th. van Mullekom
Report no.: EG-96-778
Graduation date: 25 April 1996
Graduation project: Safety of the Modular Helium Reactor: is the MHR viable?
Graduation professor: prof.dr. L.H.Th. Rietjens
Supervision: dr.ir. E.M. van Veldhuizen, dr.ir. A.I. van Heek (ECN)
Summary
The modular helium reactor (MHR), a high-temperature helium-cooled nuclear reactor, offers a very high level of safety and is often called inherently safe. Companies in the nuclear sector are interested in such a reactor type, which may result in a market introduction. Two requirements must be met for such an introduction: the reactor must indeed be inherently safe, and at the same time the reactor must be able to compete with other power plants. The safety of the reactor has been analysed with the thermohydraulic THATCH code of the American Brookhaven National Laboratory. One objective was to build up expertise with this code, which was new to ECN. Through the transfer of experience, knowledge, files and results from the THATCH code, it can be stated that this objective has been amply met.
By increasing the power output, the competitive position of the reactor can be improved. On this basis it has been studied whether the power of the reactor can be increased without adversely affecting its favourable safety characteristics; this is the central problem statement of this report. Safety requirements, like the conditions for a favourable economy, impose constraints on the modular design. The starting point was the well-described reactor module of 350 MWth with a steam cycle; the power upscaling to 450 MWth per module has been studied. An improvement of the efficiency and conceptual simplicity are achieved by applying a gas turbine with a high inlet temperature. For this design, the most important design parameters have been optimised within the applicable constraints with respect to maximum thermal efficiency. The calculations with the thermohydraulic code show that, in the case of a loss of forced cooling, a power density of at most about 6 MW per m3 of core volume may be allowed if inherent safety is to be retained. The temperature limit of the control rods turns out to be the most critical aspect of the design. The heat transfer out of the central reflector must be qualified as very poor.
The modular helium reactor, used for electricity generation, can only compete with modern coal-fired plants when the prices of natural gas and coal at least double. On the one hand, lowering the investment costs improves the competitive position of the modular plant; omitting the Shutdown Cooling System is an option that could be studied for this purpose. On the other hand, since the design is already conceptually very simple, power upscaling offers better prospects for lower electricity production costs. A hollow cylinder through the heart of the central reflector can be considered: the circulation flow in such a cylinder improves the heat transfer out of the central reflector, so that the reactor may remain nearly inherently safe even at larger thermal powers. With regard to the safety of the reactor, it can be stated that the reactor is certainly viable, since inherent safety of the reactor is already nearly achieved. Economically, however, the reactor is not yet viable, because modern gas- and coal-fired plants can produce electricity more cheaply.
151
Name of candidate: A.P.C. Vis
Report no.: EG-96-821
Graduation date: 29 August 1996
Graduation project: Application of RisPro for modelling digital protections
Graduation professor: prof.dr.-ing. H. Rijanto
Supervision: ir. W.F.J. Kersten
Summary
Modern protection systems in electricity networks are based on microprocessors. The Electrical Power Engineering group recently acquired such a multifunctional system: a programmable digital relay. Its functionality is determined by software and can be designed graphically by the user with the software package Risprotools. In Risprotools a protection function is built up hierarchically and modularly, so that the designer keeps an overview during the design process. While designing, logic functions can be connected graphically to compose a function block, which in turn can be connected graphically to another function block. In this way a relay function can be created and transferred to the digital relay, which then obtains the desired functionality. An analysis has been made of the possibilities and limitations of the software package Risprotools and of the digital relay. To this end, a distance protection was designed and tested. It is a type that uses polygonal characteristics in the complex plane for the measured fault impedance. There are five trip zones, of which the first zone has to clear a detected fault the fastest. One of the limitations of the current version of Risprotools that came to light is that large, complex protection functions cannot be implemented. For that reason, no auto-reclosing cycle was programmed in the designed distance protection, and certain functions, such as displaying the measured currents and voltages, were not included. To assess whether the distance protection meets the stated requirements, tests were carried out with Omicron equipment. This equipment generates analogue three-phase currents and voltages, which were applied to the protection relay. The functioning of the protection was then investigated by letting a fault occur.
The test results show that the designed distance protection satisfies the stated requirements reasonably to well.
152
Name of candidate: E.F. Wierenga
Report no.: EG-96-822
Graduation date: 29 August 1996
Graduation project: Testing of a distance protection system by using OMICRON and EMTP simulations
Graduation professor: prof.dr.-ing. H. Rijanto
Supervision: ir. W.F.J. Kersten
Summary
For main and backup protection without communication links, distance protection is mostly used. When a fault occurs on a transmission line, it is necessary to detect the location of the fault in order to trip the circuit breakers at each end of the faulted line section, and thus isolate that section from the power system. The fault location can be determined by measuring the impedance of the faulted conductors between the protection location and the fault. To guarantee the functionality of the distance protection, it should be tested for different fault conditions. Therefore the static and the dynamic behaviour of a prototype distance protection are examined. During the static test the voltages and currents are sinusoidal; during the dynamic test the voltages and currents contain transients, caused for instance by the electrical arc. With the OMICRON test equipment it is possible to test the static behaviour of several functions of the distance protection. To test the dynamic behaviour, OMICRON makes use of the simulation program Electromagnetic Transients Program (EMTP). For testing the static and dynamic behaviour of the distance protection, the response after fault inception and the determination of the fault location are observed. The results are:
Static test:
- The starting unit for detecting the system fault does not comply with its setting.
- In the case of a system fault to earth, the measuring unit does not locate the fault properly.
- Likewise, if the impedance at the border of the main protection zone is small, the measuring unit is not able to determine the fault location properly.
Dynamic test:
- A short-circuit current with a DC component has no influence on the behaviour of the distance protection.
- During saturation of the current transformer, or during an electrical arc at the fault location, the distance protection locates the fault farther away than it is.
From these results it can be concluded that for several system faults to earth, and for a small impedance of the main protection zone, the distance protection is not able to determine the fault location correctly. Transients caused by saturation of the current transformer and by the electrical arc tend to shift the determined fault location farther away. Since the test was executed on a prototype distance protection, the following points are recommended:
- Execute the entire test procedure for the latest hardware of the distance protection. An important point of attention is comparing the deviations above with test results of the latest hardware.
- Extend the test by examining the auto-reclosing of the distance protection.
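The impedance-based fault location principle described above can be sketched as a phasor computation. The circular zone-1 reach check below is a simplification chosen for illustration (real relays use more elaborate characteristics, e.g. mho or polygonal), and all values are invented:

```python
# Toy distance-relay check: apparent impedance = V/I as complex phasors,
# trip when its magnitude falls within the zone-1 reach.

def apparent_impedance(v_phasor, i_phasor):
    """Impedance seen by the relay between its location and the fault."""
    return v_phasor / i_phasor

def in_zone1(z, reach_ohms):
    """Simplified circular zone characteristic (an assumption of this sketch)."""
    return abs(z) <= reach_ohms

# Example: 6.6 kV phase voltage, a fault current of about 1044 A lagging it.
z_fault = apparent_impedance(complex(6600, 0), complex(1000, -300))
print(round(abs(z_fault), 2), in_zone1(z_fault, reach_ohms=8.0))
```

CT saturation or arc voltage distorts the measured phasors, which is why the dynamic tests above see the fault located farther away than it really is.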
153
LEERSTOEL HOOGSPANNINGSTECHNIEK & EMC
155
Name of candidate: S.F.W.A. Litjens
Report no.: EH.96.A.146
Graduation date: 15 February 1996
Graduation project: Inductive and Capacitive Position Detection of a 12 A, 2 MeV Electron Beam
Graduation professor: prof.dr.ir. P.C.T. van der Laan
Supervision: dr.ir. E.J.M. van Heesch
Summary
This report describes the design and construction of two kinds of diagnostic systems to determine the position and beam current of a high-energy electron beam at a fixed position along a beam line. The first diagnostic system makes use of an improved type of inductive sensor consisting of 120° sector-wound coils. These coils are air-cored with a constant winding density and are wound on a section of a toroidal coil former. Four 120° coils have been combined on one former and determine the beam position in two perpendicular directions. To determine the beam current, a Rogowski coil is wound on top of them. The coil signals are integrated, and the integrated coil voltages have been related to beam position and beam current. The relations found have been confirmed experimentally. The system makes it possible to position the beam within 0.1 mm of the centre. The second diagnostic system makes use of capacitive sensors. Various electrode shapes have been investigated; the so-called 120° electrodes are superior. These electrodes stretch over an angle of 120° of a cylindrical tube of finite length. Four electrodes are required to determine the beam position in two perpendicular directions. The output signals of the electrodes are integrated, and the integrated voltages are related to beam position and beam charge. The relations found have been confirmed experimentally for a set of 180° electrodes. The specific advantages and disadvantages of each system have been investigated.
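Four-sensor position readouts of this kind are commonly reduced to difference-over-sum ratios of opposite sensor signals. The sketch below assumes an invented linear scale factor `k` rather than the calibrated relations derived in the report:

```python
def beam_position(top, bottom, left, right, k=1.0):
    """Estimate (x, y) beam displacement from four integrated sensor signals.

    Difference-over-sum normalizes out the overall signal level; k is an
    assumed linear calibration factor, not the report's derived relation.
    """
    x = k * (right - left) / (right + left)
    y = k * (top - bottom) / (top + bottom)
    return x, y

def beam_current(top, bottom, left, right, scale=1.0):
    """The summed signals are proportional to the total beam current."""
    return scale * (top + bottom + left + right)

# A beam shifted toward the right sensor gives a positive x estimate.
print(beam_position(1.0, 1.0, 0.9, 1.1))
```

The normalization is what makes the position estimate insensitive to the beam current itself, so position and current can be read out from the same set of sensors.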
156
Name of candidate: J.P. Peters
Report no.: EH.96.A.147
Graduation date: 27 June 1996
Graduation project: Development of vacuum interrupters with the aid of field calculations integrated in CAD
Graduation professor: prof.dr.ir. P.C.T. van der Laan
Supervision: dr.ir. J.M. Wetzer, J.H.F.G. Lipperts (ACE), D. Gentsch (ACE)
Summary
ABB Calor-Emag Schaltanlagen AG (ACE), located in Ratingen, Germany, develops and produces vacuum interrupters. These interrupters are part of circuit breakers for medium-voltage applications. To arrive at a reliable design, field calculations are used extensively during development. For this the program HSSSM4, developed by ABB Corporate Research (Baden-Dättwil), is used; it is based on the surface-charge-simulation method. For the constructive design a CAD system is used. There is no link between the field-calculation program and the CAD program, so every geometry has to be entered twice. This costs time and increases the chance of errors. The goal of the graduation project was to integrate both computer systems, and to apply the integrated package to existing and new designs. For the coupling of the two programs, the CAD program has been modified such that the drawing data are written to an output file, together with the relevant properties (conductor or insulator, dielectric constant). This file is then converted to the correct format for the field-calculation program. After completion of the field calculation, the result is converted into input for the CAD program, so that equipotential lines and field-strength values can be displayed in the construction drawing. The integrated program has been applied to three interrupters: the VG2 (an existing tube) and the VG4 and VG2-R (new designs for 12 and 24 kV). Especially the last type deviates strongly from earlier designs: where other tubes have a cylindrical ceramic envelope with metal top and bottom covers, the VG2-R has a curved ceramic envelope that extends to the supply connections.
Advantages of this construction are 1) an increased insulation distance for the same dimensions, 2) a smaller number of parts, and 3) simpler assembly. With the help of the field calculations, the critical locations become identifiable during the design. High-voltage tests verified that these critical locations are indeed the "weakest" spots in practice. For this purpose the tubes were tested in the high-voltage test facility at ACE with impulse and AC voltage until they were damaged. Because a prototype was not available in time, these tests could not be performed on the VG2-R.
157
Name of candidate: J.B.M. van Waes
Report no.: EH.96.A.148
Graduation date: 17 October 1996
Graduation project: EMC analysis of the KEMA/DZL voltage measurement system
Graduation professor: prof.dr.ir. P.C.T. van der Laan
Supervision: dr. A.P.J. van Deursen
Summary
In the DeZoetenlab (DZL) of KEMA in Arnhem, voltages are measured with mixed dividers. Of the dividers in the switchyard, the high-voltage arm is permanently installed; the low-voltage impedance is located in the control room (CR). The distance between the two is 60 m. As a result of the large and rapidly changing currents and voltages during the tests, the test setup itself is also a large source of interference for the measurement system. In this report we analyse the measurement system with respect to a) its transfer impedance Zt and b) the capacitive coupling directly onto the high-voltage arm of the divider. The two effects are independent of each other. A voltage is only uniquely defined between two points for a given path. The high-voltage arm is connected to the desired measuring point. The shield of the signal cable is preferably connected to the local earth directly below the measuring point. For many reasons, the shield of the signal cable must also be earthed in the CR. As a result of this earthing at both ends, a considerable interference current can flow through the shield. The interference on the measurement signal, however, is limited by making Zt small. The common-mode current Icm through the cable shields can be reduced in many ways, e.g. with a cable duct; such measures are not considered here. The present DZL setup uses a balanced measurement system with two cables. In the switchyard there is a 1:100 pre-divider, and a 0.22 uF capacitor to earth for high-frequency "earthing". This setup is a considerable improvement over the earlier one; the Zt is low as a result of the balancing. We propose an EMC configuration with only one cable: the cold end of the high-voltage arm is connected, without a pre-divider, to the signal conductor of the cable, and the shield is earthed in the switchyard underneath that high-voltage arm. The Zt is low because at its input the cable only sees the high-impedance high-voltage arm.
In Appendix 5 we describe a low-voltage arm with good high-frequency properties that needs little adjustment. With both systems the shield of the measuring cable can be earthed at both ends, as required. The corner points in |Zt| for the DZL and the EMC approach depend on the circuit elements and on the length of the cables, and there are frequency ranges in which |Zt| scales differently for the two systems. In all cases, however, the |Zt| of the EMC solution turns out to be equal to or more favourable than that of the DZL setup. For both configurations (with a cable length of about 60 m), an Icm of 10 A at 50 Hz induces an interference voltage of 1 V, referred back to the high-voltage side. Such currents have indeed been observed. Typical high voltages lie between a few and many hundreds of kV. Experiments show that the capacitive coupling onto the high-voltage arm of the divider is then dominant, to the same degree for both measurement systems. Neighbouring circuits induce an interference of a few per cent of their own voltages. Shielding proved to be effective.
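The shield-current disturbance mechanism analysed above reduces, to first order, to V_dist = |Zt| x Icm. With an assumed |Zt| of 0.1 ohm (an illustrative value, not one measured in the report) this reproduces the 1 V per 10 A order of magnitude quoted in the summary:

```python
def disturbance_voltage(z_t_ohm, i_cm_amp):
    """First-order disturbance coupled into the measuring circuit via Zt."""
    return z_t_ohm * i_cm_amp

# Assumed transfer impedance of 0.1 ohm; 10 A common-mode shield current.
v_dist = disturbance_voltage(0.1, 10.0)  # about 1 V of disturbance
```

Keeping |Zt| small is therefore the design lever when the shield must be earthed at both ends and Icm cannot be eliminated.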
158
BUITEN DE FACULTEIT ELEKTROTECHNIEK
159
Name of candidate: R.J.P. Rutten
Graduation date: 17 October 1996
Graduation project: Image enhancement in low-vision: aspects of deblurring
Graduation professor: dr.ir. F.J.J. Blommaert (IPO)
Supervision: dr.ir. P.J.M. Cluitmans (MBS), drs. A.J. Roelofs (IPO)
Summary:
Several aspects of image enhancement in low vision are discussed. In the first part of the study we examine the effects of eye disorders such as cataract and Age-Related Macular Degeneration (ARMD) on visual acuity, contrast sensitivity and recognition times. From the experimental results we can conclude that observers with ARMD have a lower contrast sensitivity than those with cataract; contrast enhancement will therefore probably give good results for observers with ARMD. A recognition-time model is also proposed, which enables the specification of guidelines for an optimal presentation of text for low-vision observers. When no image enhancement techniques are used, text is best presented to low-vision observers in high contrast and in a character size of about 75 arcminutes (7.2 mm at a reading distance of 33 cm). In the second part of the study we investigated the applicability of deblurring for text and image enhancement. Since deblurring has a negative effect on the overall contrast of the image, we also examined the interaction between deblurring and contrast, and how these variables affect pseudo-reading rate and image appreciation. From the experiments we can conclude that some amount of deblurring used to enhance low-contrast text and images improves acuity and appreciation in comparison with the unprocessed image. In the case of high-contrast text, the contrast reduction caused by deblurring is such that most observers prefer high contrast over deblurring. In the case of complex images, some amount of deblurring is almost always preferred over the original image; this amount depends not only on the blur in the observer's visual system, but also on the amount of important small detail in the image. In conclusion, with the current state of technology, high-contrast text enhancement by means of deblurring is not possible; for complex images, however, deblurring can give quite good results.
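The 75-arcminute guideline converts to a physical character size through the small-angle relation s = 2·d·tan(θ/2). A quick check, reproducing the 7.2 mm figure at a 33 cm reading distance:

```python
import math

def char_size_mm(angle_arcmin, distance_mm):
    """Physical size subtending a given visual angle at a given distance."""
    theta = math.radians(angle_arcmin / 60.0)   # arcminutes -> radians
    return 2.0 * distance_mm * math.tan(theta / 2.0)

size = char_size_mm(75.0, 330.0)
print(f"{size:.1f} mm")   # ~7.2 mm, matching the guideline in the summary
```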
Candidate: R. Teunen
Graduation date: 25 April 1996
Project: Automatic voiced/unvoiced/silence labelling of significant excitation in speech signals
Graduation professor: prof.dr. R.P.G. Collier (IPO)
Supervision: dr.ir. R.N.J. Veldhuis, dr.ir. A.C. den Brinker (ESP), prof.dr. B. Yegnanarayana (IIT)
Summary Typically in speech analysis and recognition, segments of fixed duration are analysed and labelled as belonging to one of several categories. In this report it is proposed that instants of significant activity are identified first from the speech signal, and then labelled into the designated categories (voiced, unvoiced and silence). Recently a method has been proposed for identification of instants of significant excitation of the vocal tract system. These instants correspond to the instants of glottal closure in voiced speech and to the instants of onset of events such as bursts in other, nonvoiced, situations. In many nonvoiced segments such as unvoiced and silence cases, the instants occur at random times. Classification of the instants is based on measurements extracted from the speech signal immediately following the instants, and is carried out by a classifier consisting of three neural networks. Knowledge of locations and labels of the instants of significant excitation will enable us to process speech signals effectively, in applications such as prosodic manipulation and for initial stages of speech recognition.
SUMMARIES OF GRADUATION REPORTS, FACULTY OF ELECTRICAL ENGINEERING
1997
Eindhoven University of Technology accepts no liability for the contents of the summaries of graduation reports included in this volume.
CONTENTS

DEPARTMENT OF TELECOMMUNICATION TECHNOLOGY & ELECTROMAGNETICS ... 5
Chair of Radio Communication ... 7
Chair of Semiconductor Devices ... 13
Chair of Electro-Optical Systems ... 21
Chair of Information and Communication Theory ... 27
Chair of Electromagnetism ... 29

DEPARTMENT OF SYSTEMS FOR ELECTRONIC SIGNAL PROCESSING ... 33
Chair of Electronic Circuits ... 35

DEPARTMENT OF MEASUREMENT & CONTROL SYSTEMS ... 53
Chair of Measurement and Control ... 55
Chair of Medical Electrical Engineering ... 73
Chair of Electromechanics & Power Electronics ... 87

DEPARTMENT OF INFORMATION & COMMUNICATION SYSTEMS ... 97
Chair of Digital Information Systems ... 99
Chair of Automatic System Design ... 125

DEPARTMENT OF ELECTRICAL POWER ENGINEERING ... 129
Chair of Electrical Energy Systems ... 131
Chair of High-Voltage Engineering & Electromagnetic Compatibility ... 137

OUTSIDE THE FACULTY ... 141
DEPARTMENT OF TELECOMMUNICATION TECHNOLOGY & ELECTROMAGNETICS
CHAIR OF RADIO COMMUNICATION
Candidate: J.J. Boschma
Graduation date: 24 April 1997
Project: Implementation of a spread spectrum receiver with two DSPs in parallel
Supervision: ir. J. Dijk
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
Within the last ten years, a rapid expansion of mobile satellite communications has taken place. Since 1992 the Telecommunications Division of the Eindhoven University of Technology has conducted research into satellite data communication using small portable terminals. Such a portable ground station, with an antenna diameter of several decimetres, is called a picoterminal. The project at the EUT concerns the development of a picoterminal capable of data rates in the order of 100 bits per second. It employs direct-sequence code division multiple access (DS-CDMA), a spread spectrum technique. For modulation, a digital scheme called binary phase shift keying (BPSK) is used. The final goal of the project is the realization of a network in which a large number of picoterminals share the same frequency band. A design was studied in which 67 picoterminals are implemented, each having a bit rate of 125.2 bit/s. A second option provides for 62 picoterminals, each having a bit rate of 63 bit/s. In both network designs the picoterminals operate with a chip rate of 64 kchip/s. Currently, the EUT picoterminal project is focused on the development of an all-digital implementation of a modem. This modem is equipped with two digital signal processors (DSPs), the TMS320C50 of Texas Instruments. The resulting processing power should make the modem capable of functioning as a baseband transmitter or as an IF receiver, which demodulates the BPSK signal by means of coherent subsampling. Chip rates of up to 64 kchip/s are feasible. In this report, a description is given of the software implementation on the baseband modem. The distribution of software tasks over both processors is discussed, and calculations concerning critical timings in the software are given as well. To enhance the frequency resolution of the numerically controlled oscillator on a DSP, a new software algorithm was implemented.
Suggestions for improving the modem software are given at the end of the report. The IF receiver was tested by performing bit error ratio measurements for different signal-to-noise ratios at the input of the receiver. The measurements show that the modem has an implementation loss of approximately 1 dB. In order to find an explanation for this loss, an analysis was made to determine the receiver's loss of performance as a result of non-ideal behaviour of the control loops in the presence of noise. Furthermore, the sampling of band-limited Gaussian noise was investigated.
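The DS-CDMA principle used by the picoterminal can be illustrated in a few lines: each data bit is multiplied by a ±1 chip sequence, and the receiver recovers the bit by correlating the received chips with the same sequence. A toy baseband sketch (random 31-chip code, no noise, no carrier or timing recovery; the real system's 64 kchip/s rate and code are not modelled here):

```python
import random

def spread(bits, code):
    """Multiply each data bit (+/-1) by the full chip sequence."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate each chip block with the code and take the sign."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(x * c for x, c in zip(chips[i:i + n], code))
        out.append(1 if corr >= 0 else -1)
    return out

random.seed(1)
code = [random.choice((-1, 1)) for _ in range(31)]  # illustrative 31-chip code
bits = [1, -1, -1, 1]
recovered = despread(spread(bits, code), code)
print("recovered:", recovered)
```

In the noiseless case the correlation is exactly ±31 per bit, so the data are recovered perfectly; the report's analysis concerns precisely what happens when noise and non-ideal control loops are added.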
Candidate: C.P.T. Duivenvoorden
Graduation date: 24 April 1997
Project: Fractionally-spaced equalization for CDMA systems
Supervision: dr.ir. L. Vandendorpe (UCL, Louvain-la-Neuve, Belgium)
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
In order to answer the growing need for mobile communications, high-capacity multiple access systems must be used. Code Division Multiple Access (CDMA) based systems can offer an increased capacity. Inherent in the CDMA technology, all system users interfere with each other; this means that the correlation between the signals of the users must be as small as possible. In conventional detectors every user's signal is treated individually. By using joint detection, the multiuser detector obtains additional information from the correlation between the signals of the users. The optimum multiuser detector appears to be too complicated to implement in any practical situation; well-designed suboptimum multiuser detectors do not necessarily suffer from this problem. In this project we have studied fractionally-spaced equalization. Fractionally-spaced equalizers work at a higher sample rate than the chip rate; in this way a channel matched filter is not required. We show how the system parameters are derived using the Minimum Mean-Square-Error criterion. The performance of the system is judged by its capability to combat the multiple access interference in both high- and low-noise situations, for various numbers of users. The system is analyzed for single-path and two-path reception.
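The Minimum Mean-Square-Error criterion mentioned above leads to the familiar Wiener solution w = R⁻¹p, where R is the correlation matrix of the received samples and p their cross-correlation with the desired symbols. A minimal fractionally-spaced illustration with two received samples per symbol (single user, synthetic channel; all numbers are invented for illustration):

```python
import numpy as np

def mmse_equalizer(rx_matrix, desired):
    """Wiener solution w = R^-1 p estimated from training data.

    rx_matrix: one row of received samples per training symbol
    desired:   transmitted training symbols (+/-1)
    """
    R = rx_matrix.T @ rx_matrix / len(desired)    # sample correlation matrix
    p = rx_matrix.T @ desired / len(desired)      # cross-correlation vector
    return np.linalg.solve(R, p)

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=500)
# Fractionally spaced: two received samples per symbol, mild ISI + noise.
rx = np.stack([symbols + 0.3 * np.roll(symbols, 1),
               0.8 * symbols + 0.1 * rng.normal(size=500)], axis=1)
rx += 0.05 * rng.normal(size=rx.shape)

w = mmse_equalizer(rx, symbols)
decisions = np.sign(rx @ w)
errors = int(np.sum(decisions != symbols))
print("symbol errors:", errors)
```

A multiuser version would stack the samples of all users' signals into `rx_matrix` in the same way, which is how the joint detector exploits the cross-correlations between users.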
Candidate: J.T.C. Duivenvoorden
Graduation date: 28 August 1997
Project: Analysis of an annular slot antenna for millimetre- and submillimetre-wave applications
Supervision: ir. M.J.M. van der Vorst, dr.ir. M.H.A.J. Herben, dr.ir. P.J.I. de Maagt (ESA/ESTEC)
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
In order to make use of millimetre and submillimetre waves, the dimensions of components become very small, and the expense of high-quality manufacturing via conventional precision machining is therefore increasing rapidly. To overcome this problem, successful attempts have been made in recent years to design planar integrated antennas, in which the active circuitry and the antenna itself are integrated on the same substrate. Previous studies of planar antenna elements, such as the double slot and the double dipole, showed that good results can be obtained. However, due to the constraint of the central location of the active element, these antennas are not applicable in all circumstances. In the case of a receiving antenna the active element is a detection diode whose size is comparable to the antenna size at certain frequencies. The idea was to solve this problem by the introduction of an annular slot, where the detection diode is placed outside the antenna. This graduation report describes a theoretical analysis of a planar annular slot antenna. Most of the work is based on a model in which the ring-slot element is printed on a dielectric half-space; in practice this half-space is synthesized by a dielectric lens coated with a matching layer to reduce reflections. The formulas derived in this report are used to calculate the input impedance and the far-field radiation patterns. The theory for the ring slot antenna printed on a dielectric half-space is based on the introduction of azimuthal modes of a magnetic current density distribution along the slot. The electromagnetic fields at both sides of the annular slot antenna are derived from Maxwell's equations together with application of the proper boundary conditions. Because of the circular geometry of the ring slot, the Hankel transform is used for the analysis.
The magnetic current density distribution is determined with the help of Galerkin's procedure, which introduces a summation of weighted basis functions. Once the coefficients for the azimuthal modes are known, the input impedance and far-field radiation patterns can be obtained. Furthermore, this report deals with the software implementation of the derived formulas for the input impedance and far-field radiation patterns. Where possible, the results obtained with the described methods have been verified against previously published results. The annular slot antenna is used as feed for an integrated lens antenna, and radiation patterns have been obtained for the complete antenna using a software module written by van der Vorst. The results are compared with those of an integrated lens antenna with a double slot as feed element. For those applications where the size of the detection diode becomes larger than the spacing between the two slot elements, the annular slot antenna is preferred, because the diode can then be placed outside the antenna. Otherwise, the double slot is recommended because of its better radiation properties.
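For reference, the circular geometry calls for the Hankel transform of order n; a standard definition of the transform pair is (the report's normalisation may differ):

```latex
F_n(k_\rho) = \int_0^\infty f(\rho)\, J_n(k_\rho \rho)\, \rho \,\mathrm{d}\rho ,
\qquad
f(\rho) = \int_0^\infty F_n(k_\rho)\, J_n(k_\rho \rho)\, k_\rho \,\mathrm{d}k_\rho ,
```

with J_n the Bessel function of the first kind of order n. Each azimuthal mode of the magnetic current density transforms independently, which is what makes this the natural tool for the ring-slot geometry.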
Candidate: M.J.H. Pleijers
Graduation date: 28 August 1997
Project: Radiometer calibration and retrieval algorithms for passive remote sensing of the atmosphere
Supervision: drs. S.C.H.M. Jongen, dr.ir. M.H.A.J. Herben
Graduation professor: prof.dr.ir. G. Brussaard
Summary
Microwave radiometers are widely used for the passive remote sensing of the atmosphere. The Radiocommunications Group of the Division of Telecommunication Technology and Electromagnetics possesses two radiometer systems for remote-sensing applications. One of these systems, the RESCOM radiometer, was borrowed from ESA-ESTEC and has been used for the CLARA (CLouds And RAdiation) measurement campaigns. The other, the TUE radiometer, was designed at the Eindhoven University of Technology. The 29.8 GHz channel of the TUE radiometer has been calibrated with a hot and a cold load. The behaviour of this Dicke-type radiometer is assumed to be linear over its full range, but this had never been tested before. In this report the linearity of the TUE radiometer is tested using the data of the 31.7 GHz channel of the RESCOM radiometer. Linear regression lines have been used to find a transformation from brightness temperatures measured with the TUE radiometer to brightness temperatures measured with the RESCOM radiometer. The analysis showed that the assumption of linear behaviour was correct. During the data analysis the data files of the RESCOM radiometer turned out to be corrupted; before they could be used, they first had to be restored. For the retrieval of the integrated water vapour amount V and the integrated liquid water amount L of the atmosphere, the Matched Atmosphere Algorithm has been designed. The version of the algorithm now in use is based on a look-up structure. This algorithm uses the Microwave Propagation Model of H.J. Liebe as a model for the propagation of microwave radiation in the atmosphere; this model has been extended with a part describing the attenuation due to rain. For the retrieval of V and L, the brightness temperatures measured with the 21.3 GHz and 31.7 GHz channels of the RESCOM radiometer are used.
The data obtained during the CLARA measurement campaigns showed that during periods of clear sky the retrieved values of liquid water appeared to be negative, which is physically impossible. There are various possible causes for this problem. One is the accuracy of the RESCOM radiometer; however, varying the brightness temperatures within the accuracy range of the radiometer showed that the occurrence of negative L-values cannot be explained solely by such variations, since the amounts by which the brightness temperatures had to be changed were larger than the accuracy range. Another possible cause is the inaccuracy of the extrapolation procedure used in the retrieval algorithm. For the extrapolation of V and L, the three nearest points in the look-up table that define a plane are calculated, and that plane is used for the extrapolation. A new extrapolation procedure has been designed that is based on a square fit of the points in the reduced look-up table representing thin clouds. With this new extrapolation procedure the retrieval of L still remains negative, but less negative than the retrieval with the three nearest points.
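The linearity test described above amounts to fitting a straight line T_RESCOM ≈ a·T_TUE + b and checking that the residuals stay within the radiometer accuracy. A sketch with synthetic brightness temperatures (the gain, offset and noise values are invented for illustration):

```python
import numpy as np

def fit_calibration(t_tue, t_rescom):
    """Least-squares line mapping TUE to RESCOM brightness temperatures."""
    a, b = np.polyfit(t_tue, t_rescom, 1)
    residuals = t_rescom - (a * t_tue + b)
    return a, b, residuals

rng = np.random.default_rng(42)
t_tue = np.linspace(20.0, 120.0, 200)                     # K, synthetic sky temps
t_rescom = 1.02 * t_tue - 1.5 + rng.normal(0, 0.3, 200)   # assumed linear + noise

a, b, res = fit_calibration(t_tue, t_rescom)
print(f"gain={a:.3f}, offset={b:.2f} K, residual rms={res.std():.2f} K")
```

If the instrument were non-linear, the residuals would show a systematic curve with temperature rather than the flat, noise-limited scatter seen here.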
Candidate: N. Scully
Graduation date: 11 December 1997
Project: An adaptive algorithm for a GSM base station smart antenna: selection and DSP implementation
Supervision: ir. J.R. Schmidt (KPN Research), dr.ir. M.H.A.J. Herben
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
Due to the constant increase in the number of GSM (Global System for Mobile communications) subscribers, the capacity of the GSM network must continually be expanded. A new method for realizing capacity increase is the use of smart antennas. A smart antenna is an adaptive array antenna that controls its own radiation pattern. GSM is a mobile phone system based on a cellular network. The number of channels, out of the total number available, that can be used in a particular cell is limited by interference from other cells. This interference can be reduced by adaptively controlling the radiation pattern of the base station smart antenna. The two main categories of adaptive algorithms are optimum combining algorithms and direction finding algorithms. Optimum combining algorithms adjust the radiation pattern in such a way as to optimize the received signal; an example is the Least Mean Squares (LMS) algorithm. A direction finding algorithm can be used to find the direction of the desired signal, so that a beam can then be pointed in that direction; an example is MUSIC (MUltiple SIgnal Classification). Simulations show that the MUSIC algorithm can deal better with the multipath that occurs in a GSM situation than the LMS algorithm can. Therefore, a direction finding algorithm was chosen. MUSIC is found to be a more reliable algorithm than UCA-ESPRIT, another direction finding algorithm. Simulations show that the MUSIC algorithm can accurately determine the direction. The gain in the carrier-to-interference ratio relative to an omnidirectional antenna is typically around 8 dB and mostly in the range 5-15 dB. This results in a capacity increase of 75%. The MUSIC algorithm and the beamforming are implemented on a standard fixed-point DSP (Digital Signal Processor). The results of the implementation show negligible difference with the simulation results.
The total time required for direction finding and beamforming for a single TDMA (Time Division Multiple Access) burst is 3.53 ms, less than the GSM frame time of 4.6 ms. Measurement data and a real-time processing system will be used to test the algorithm in practice. The implementation of direction finding in combination with a directional beam using DSP technology has been completed successfully; practical tests, however, have yet to be done.
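The MUSIC algorithm chosen above estimates directions from the noise subspace of the array covariance matrix: steering vectors belonging to true arrival directions are nearly orthogonal to that subspace, so the pseudo-spectrum 1/||E_n^H a(θ)||² peaks there. A minimal sketch for a half-wavelength-spaced uniform linear array (the base station would feed in real burst samples; the scenario below is entirely synthetic):

```python
import numpy as np

def music_spectrum(snapshots, n_sources, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum for a uniform linear array, spacing d wavelengths."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]          # noise-subspace eigenvectors
    spec = []
    for a in np.radians(angles_deg):
        sv = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(a))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ sv) ** 2)
    return np.array(spec)

rng = np.random.default_rng(0)
m, n_snap = 8, 400
true_angle = 20.0                                # degrees, invented test case
sv = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(true_angle)))
sig = rng.normal(size=n_snap) * sv[:, None]      # one narrowband source
noise = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
x = sig + noise

grid = np.arange(-90.0, 90.5, 0.5)
est = grid[np.argmax(music_spectrum(x, 1, grid))]
print(f"estimated direction: {est:.1f} deg")
```

Once the direction estimate is available, beamforming reduces to weighting the array with the corresponding steering vector, which is the step the report implements on the fixed-point DSP.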
CHAIR OF SEMICONDUCTOR DEVICES
Candidate: V.E.S. van Dijk
Graduation date: 12 June 1997
Project: Modelling and characterization of self-pulsating visible-light laser diodes
Supervision: dr. van de Roer
Graduation professor: prof.ir. G.A. Acket
Summary:
This graduation project for the Faculty of Electrical Engineering of Eindhoven University of Technology was carried out at the Philips Optoelectronics department of Philips Research. The aim of the assignment was to realize, model and characterize self-pulsations in visible-light-emitting laser diodes. Red lasers are applied in the new generation of optical recording systems (DVD). The purpose of realizing a self-pulsating laser is to reduce feedback noise. The idea arose in the past to make the laser pulsate by means of a saturable absorber layer. In earlier research this idea was already realized successfully for 780 nm AlGaAs laser diodes, but a direct translation to red lasers proved impossible; investigating this was therefore also the aim of this assignment. A first step was the realization of self-pulsating gain-guided lasers. A second step was then taken towards realizing self-pulsating index-guided lasers. These lasers were designed such that the astigmatism is reduced, a requirement for use in optical recording applications. First, a so-called single-mode model was derived for the self-pulsating laser diode. This model consists of three coupled differential equations: one for the carriers in the active layer, one for the carriers in the absorber, and one for the photons in the cavity. These three quantities, however, cannot be measured; what can be measured are the average number of photons and the pulse repetition frequency. The model was therefore translated into these two physically measurable quantities. Simulations were carried out with this model, in which the characteristics of the laser were computed and the influence of the model's many parameters was investigated.
In this way a thorough insight was obtained into the physics of self-pulsating lasers. The simulation results were then used in drawing up a design for the stripe lasers. Experiments showed that these lasers self-pulsate. These experimental results were subsequently used to fit the model to the experiments, yielding a model that can predict the behaviour of other laser structures. The next step was the realization of self-pulsating selectively buried ridge (SBR) laser diodes. Because of the extra-high internal losses due to the two buried current-blocking layers (GaAs), the design was adapted, supported by the simulations: the mode confinement of the absorber layer was increased. The realized SBR structures turned out to pulsate at room temperature (25 °C), but the performance of the SBR lasers (with and without extra waveguide) was far below expectation. This can be explained by leakage current. In red lasers the leakage current has a very low activation temperature, so even at room temperature it already plays a major role. Part of the leakage current also ends up in the absorber layer, which reduces the absorption and thereby the tendency to self-pulsate. Implementing an extra waveguide around the absorber only increases this part of the leakage current. From the experimental results it is therefore concluded that growing an extra waveguide is not a good solution for realizing self-pulsating lasers with good performance. It is recommended instead to grow the absorber in the n-cladding, because the hole component of the leakage current is much smaller there; an extra-high doping must then be applied in the absorber to keep the hole lifetime sufficiently short.
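The structure of the single-mode model — three coupled rate equations for the carriers in the gain section, the carriers in the absorber and the photon number — can be sketched with a simple forward-Euler integration. All coefficients below are generic illustrative values, not the fitted parameters of the report, so the trace only shows qualitative turn-on dynamics:

```python
import numpy as np

def rate_equations(steps=50000, dt=1e-13):
    """Toy two-section laser model: n1 = gain-section carriers,
    n2 = saturable-absorber carriers, s = photon number.
    All parameter values are illustrative, not fitted."""
    tau1, tau2, tau_p = 2e-9, 1e-10, 2e-12   # carrier / photon lifetimes [s]
    g1, g2 = 1e4, 3e4                        # gain coefficients [1/s per carrier]
    n01, n02 = 1e8, 5e7                      # transparency carrier numbers
    pump, beta = 2e17, 1e-4                  # pump rate, spontaneous fraction
    n1, n2, s = n01, 0.0, 1.0
    trace = np.empty(steps)
    for k in range(steps):
        gain = g1 * (n1 - n01) + g2 * (n2 - n02)      # net modal gain [1/s]
        dn1 = pump - n1 / tau1 - g1 * (n1 - n01) * s  # gain-section carriers
        dn2 = -n2 / tau2 - g2 * (n2 - n02) * s        # absorption fills n2
        ds = (gain - 1.0 / tau_p) * s + beta * n1 / tau1
        n1, n2, s = n1 + dt * dn1, n2 + dt * dn2, max(s + dt * ds, 0.0)
        trace[k] = s
    return trace

s = rate_equations()
print(f"photon number: final {s[-1]:.3g}, peak {s.max():.3g}")
```

The report's actual step is to rewrite such a model in terms of the two measurable quantities (mean photon number and pulse repetition frequency) before fitting it to the experiments.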
Candidate: F. de Bruyn
Graduation date: 13 February 1997
Project: New sealing method for DBRs in polarization-controllable intra-cavity contacted VCSELs
Supervision: prof.dr. G.A. Acket, dr.ir. T.G. van de Roer, dr. F. Karouta, ir. M.P. Creusen
Graduation professor: prof.dr.-ing. L.M.F. Kaufmann
Summary:
Worldwide, a great deal of research on semiconductor diode lasers has in the last five years been devoted to Vertical Cavity Surface Emitting Lasers (VCSELs). This new type of laser has several advantages over the conventional semiconductor laser:
• The vertical, low-divergence, circular output beam improves the fiber coupling efficiency.
• Two-dimensional laser arrays can be realized.
• Built-in mirrors make surface treatment of cleaved facets superfluous.
At the Eindhoven University of Technology an intra-cavity contacted VCSEL has been developed. Three major aspects of intra-cavity contacted VCSELs are treated in this master's thesis:
• Fixing the polarization direction for VCSELs in general and for intra-cavity contacted VCSELs in particular.
• Wet selective oxidation of AlxGa1-xAs layers, used for confining the current in the intra-cavity contacted VCSEL.
• Protection of the AlAs layers in the top distributed Bragg reflector (DBR) of intra-cavity contacted VCSELs against wet selective oxidation during the fabrication of the current confinement.
All of the polarization-fixing schemes rely in some manner on breaking the symmetry in the plane of the quantum wells. With different degrees of success, these methods promote a single polarization state for the first-order transversal mode, but at the expense of increased fabrication complexity. For the intra-cavity contacted VCSEL several mechanisms for fixing the polarization can be implemented. Options are: growth or fabrication of a grating on top of the top DBR, anisotropic cavity shapes, asymmetric active regions, epitaxial growth on misoriented substrates and, most importantly, the oxide-confined VCSEL. Since the oxidation rate of AlxGa1-xAs strongly increases with increasing Al content in the compound, the wet oxidation is very selective.
The oxides are mechanically as well as thermally very stable and were determined to be a cubic structure of γ-Al2O3 in a polycrystalline phase. Whereas vertical oxidation of AlxGa1-xAs tends to show a diffusive character and even saturation for longer oxidation times, the lateral oxidation stays reaction-limited. This is explained by the formation of 'canals' along the oxide/semiconductor interface due to the contraction of the oxide; these 'canals' take care of the fast transport of the oxidizing species to and from the oxidation front. When alternating AlAs/GaAs layers are used for the DBR, the AlAs layers have to be protected against this oxidation. The initially used method of depositing thick silicon nitride does not yield good results for smaller devices. A sealing method proposed by Huffaker et al. involves a rapid thermal annealing step of the uncovered mesa in forming gas at a temperature of circa 550 °C. This method works well for wet-etched mirrors, but fails for dry-etched mirrors. A new sealing method has been developed based on this 'Huffaker method'. Whereas smooth interfaces are obtained after wet etching, dry etching is known to be more aggressive, leaving corroded and contaminated side walls due to a sputter effect. In the new sealing method the corroded and contaminated side walls are 'cleaned' by a dip in a diluted wet-etch solution; an annealing step afterwards yields an effective seal.
Candidate: H.A. Langeler
Report no.: TTE-EEA 540
Graduation date: 28 August 1997
Project: Fabrication and characterization of lasers with wet-chemically etched mirrors monolithically integrated with a photodetector
Research theme: III-V Compound Semiconductor Devices for Opto-Electronic Applications (EEA 24)
Supervision: prof.dr. G.A. Acket, dr. F. Karouta
Graduation professor: prof.dr.-ing. L.M.F. Kaufmann
Summary:
Wet chemically etched mirror lasers (WCEMLs) are optical waveguides with gain, terminated on both sides by mirrors. In WCEMLs the mirrors are obtained by wet-chemical etching instead of cleaving. WCEMLs are interesting electronic building blocks for two reasons: (1) they can be made much shorter than conventional semiconductor lasers with cleaved mirrors, which matters for (2) the monolithic integration of WCEMLs with other components, such as transistors or photodetectors. The Electronic Devices chair of the Telecommunication Technology and Electromagnetics department has developed a universal process for fabricating WCEMLs with lengths ranging from 750 µm down to 7 µm (design dimensions on the mask set). With this universal process, lasers with various layer structures can be fabricated. After optimization of the first two process steps, GaAs/AlxGa1-xAs WCEMLs with two and three quantum wells in the active layer were fabricated, as well as WCEMLs with a 50 nm thick bulk GaAs active layer. Fabricating WCEMLs requires, after the epitaxial growth of the layer structure, four lithography steps: (1) etching of the ridge waveguide and deposition of a silicon nitride isolation layer; (2) application of the p-contact metallization; (3) opening of windows in the isolation layer for the mirrors; (4) wet-chemical etching of the mirrors. After these steps the wafer is polished and the n-contact metallization is applied. Cleaving the wafer then follows as the last step, to separate the devices for characterization. Short lasers (shorter than 200 µm) with two or three quantum wells show light emission at several wavelengths between 780 nm and 860 nm. This can be explained by the fact that the energies of electrons in the conduction band and of holes in the valence band are divided into discrete levels in the quantum wells.
A WCEML was also monolithically integrated with a photodetector, which requires one extra lithography step. Both the laser and the detector have a bulk active layer. To prevent light from being coupled back into the laser, the detector wall closest to the laser was made slightly inclined with a wet-chemical etch process, so that the light falling on it is reflected away from the laser. Because the detector has the same waveguide structure as the laser, light is still coupled into the detector. Measurements on the integrated laser with detector showed that the photocurrent of the detector is almost linearly related to the emitted optical power of the laser, and that the laser hardly seems to suffer from the presence of the detector. The optical power of the laser can thus be measured with the integrated monitor detector. This is important for applications in which the laser power must be kept stabilized. The stabilized laser can be used as a reference source in a coherent demodulation system and as a transmitter in short-distance optical communication systems.
Candidate: J.M.C.P. van Meer
Report no.: EEA 542
Graduation date: 16 October 1997
Project: Design, fabrication and characterisation of a 20 GHz low phase noise oscillator
Supervision: ir. De Raedt (IMEC)
Graduation professor: prof.dr. L.M.F. Kaufmann
Summary:
Future mobile telecommunication systems require receivers and transmitters that can operate at microwave or even millimetre-wave frequencies. Such systems can be realized using, for example, III-V semiconductors such as GaAs and InP. Fast transistors such as the High Electron Mobility Transistor (HEMT) can be fabricated on such a substrate and integrated with passive components, resulting in a Monolithic Microwave Integrated Circuit (MMIC). The low-frequency noise in these HEMTs, however, is an important limitation for the application of these active devices in non-linear circuits such as mixers and oscillators: the low-frequency noise in the HEMT is modulated by the information-carrying high-frequency signal, resulting in unwanted amplitude and frequency modulation, i.e. phase noise. This graduation project describes the modelling of the low-frequency noise in InP-based HEMTs fabricated in the MMIC technology of IMEC. This noise modelling makes it possible to design non-linear circuits in which a high signal-to-noise ratio is required, as in receivers and transmitters. A measurement-based, bias-dependent low-frequency noise model was implemented in the existing non-linear model associated with the in-house InP-based HEMT technology. Both the noise model and the non-linear model were implemented in the simulator section of HP Microwave Design System. Subsequently, to verify the applicability of this noise model, a 20 GHz MMIC oscillator was designed. At every step of the design the phase-noise level was continually taken into account, so as to arrive at a design with a minimal noise level.
Besides the layout of the final oscillator circuit, the individual sub-circuits were also placed on the mask layout, so that each part of the complete circuit can be characterized separately afterwards. This mask layout is the starting point for the fabrication of the optical masks required for the fabrication of the oscillator. The phase-noise spectrum of the realized elementary oscillator was measured and compared with the noise spectrum obtained from the simulations: a measured phase-noise level of -77.1 dBc/Hz versus a simulated level of -83.6 dBc/Hz at an offset of f = 100 kHz from the carrier. The results achieved show that the modelling and implementation of the low-frequency noise model were carried out correctly. Using this model, the phase-noise behaviour of any non-linear circuit fabricated in the IMEC MMIC technology can be predicted.
Candidate: F.J.M. Wennekes
Report no.: EEA 541
Graduation date: 11 December 1997
Graduation project: Characterisation of Wafer Fused InP/GaAs Heterojunctions
Supervision: dr. K. Streubel, Royal Institute of Technology (KTH), Stockholm, Sweden; dr. M. Hammar, KTH, Stockholm, Sweden
Graduation professor: prof.dr. L.M.F. Kaufmann
Summary:
The aim of this study is to investigate the suitability of InP-to-GaAs wafer fusion for optoelectronic devices. Wafer fusion is a new technology employed in the integration of lattice-mismatched materials. InP-to-GaAs wafer fusion has mainly been developed for use in long-wavelength (1.3 or 1.55 µm) Vertical Cavity Surface Emitting Lasers (VCSELs). At these wavelengths it is difficult to find highly reflective mirrors lattice-matched to the InP-based active region. To solve this problem, lattice-mismatched GaAs Distributed Bragg Reflectors (DBRs) are wafer fused to the active region. The effect of the n-doping concentration on the junction resistance has been studied. Complementary measurements with Secondary Ion Mass Spectrometry (SIMS) to analyse the dopant distribution have been performed. SIMS has also been used to search for contaminants at the interface. The electrical measurements are fitted by a theoretical model developed by Dr. Joachim Piprek at the University of California in Santa Barbara. The effect on the electrical properties is investigated for three doping concentrations on the GaAs side, i.e. 2e17, 1.5e18 and 6e18 cm-3, and two on the InP side, i.e. 9e17 and 6e18 cm-3. No significant influence of the InP doping concentration can be observed. The electrical conductivity improves for higher doping levels on the GaAs side. It is assumed that the optical losses of a fused interface are of minor importance in the VCSEL device. On the other hand, the heat treatment in combination with the high pressure can cause disordering in the fused structures, which can degrade the optical properties. In the second part of the study the electrical conductivity of the junction has been investigated in fusion experiments at different temperatures, in order to find a minimum fusion temperature with good electrical properties. Above 550 °C the electrical conductivity shows no significant changes; below this temperature the conductivity degrades progressively.

The effect of fusion on the quality of the quantum wells in an active layer, designed for a double-fused VCSEL, was investigated by photoluminescence (PL) analysis. The PL intensity fluctuates over the same sample after fusion; probably the quantum wells are damaged by the fusion process or the substrate etching. A disordering of the quantum wells during the fusion process would give a broadening of the PL signal, but no such broadening was observed. Finally, a pilot process for bottom-emitting single-fused VCSELs was carried out.
Candidate: F.W. Ahlrichs
Report no.: TTE 544
Graduation date: 11 December 1997
Graduation project: Simulation and optimization of a polysilicon emitter BICMOS process
Supervision: ir. Som Nath, ir. A. Heringa (Philips Semiconductors Nijmegen)
Graduation professor: prof.dr. F.M. Klaassen
Summary:
Analysis of process fluctuations is critical in developing manufacturable technologies with minimal variations in electrical characteristics and in the determination of worst-case process descriptions. Physical device and process modeling can be used to identify and reduce the variations in the electrical characteristics due to fluctuations in processing. To perform such an analysis correctly for the first-order transistor parameters, physical phenomena like bandgap narrowing, the intrinsic carrier concentration, minority hole and electron mobilities, Auger and Shockley-Read-Hall recombination, and the modeling of the polysilicon/single-crystalline silicon interface are critical. Inconsistent use of these parameters in the device simulations leads to incorrect design and interpretation of experiments. This project describes the process and device simulations for a polysilicon emitter bipolar transistor used in a BICMOS process. In the modeling, the threshold-adjust boron implantation appears to have a large effect on the electrical behaviour of the bipolar part of the process. Acceptable simulation results were eventually obtained by taking all of the effects mentioned above into account. Furthermore, for statistical modeling purposes it is necessary to predict the sensitivity of an electrical parameter to variations in process conditions such as the base implant dose. Once the process and device simulations showed acceptable results, a statistical experiment was done to check the variations in the critical process parameters such as the base implant dose and energy, the etch-back of silicon underneath the polysilicon and the temperature of the RTA emitter drive-in. From this experiment, sensitivity curves were found which predict the fluctuation of the electrical parameters with variations in the above four process parameters, which will be very helpful for process engineers in a production environment.
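The sensitivity curves mentioned above are essentially derivatives of an electrical parameter with respect to a process condition. A minimal sketch of how such a sensitivity can be extracted from simulation results by a central difference (the model function `hfe_model` and all numbers here are hypothetical illustrations, not values from the report):

```python
def sensitivity(simulate, x0, rel_step=0.01):
    """Central-difference sensitivity d(output)/d(condition) of a
    process/device simulation around the nominal condition x0."""
    h = rel_step * x0
    return (simulate(x0 + h) - simulate(x0 - h)) / (2.0 * h)

# Hypothetical analytic stand-in for a process/device simulation:
# current gain falling with the square root of the base implant dose.
def hfe_model(dose):
    return 120.0 * (1e13 / dose) ** 0.5
```

In practice `simulate` would wrap a full TSUPREM-4/MEDICI-style run; evaluating it at a few points around the nominal condition yields the sensitivity curve directly.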
Candidate: M. Bruijsten
Report no.: EEA 543
Graduation date: 11 December 1997
Graduation project: TCAD simulation of latch-up in 0.5 micron CMOS on epi and bulk substrates
Supervision: ir. F. Huisman (Philips Semiconductors, MOS4you)
Graduation professor: prof.dr. F.M. Klaassen
Summary:
The 0.5-0.35 micron CMOS technologies used at the MOS4you division of Philips Semiconductors Nijmegen are processed on silicon wafers consisting of a thin, relatively lowly doped epitaxial layer on a highly doped substrate. A significant wafer cost price reduction can be obtained by omitting the epitaxial layer and processing the CMOS devices on bulk wafers. Although beneficial from a cost point of view, processing on bulk wafers will negatively affect, amongst other things, the latch-up performance of the circuit. We have used TCAD simulations to assess the latch-up characteristics of a CMOS device on both epi and bulk wafers. In particular, the effect of a Buried Implanted Layer for Lateral Isolation (BILLI), which has been claimed to result in epi-equivalent latch-up performance on bulk wafers, is addressed. TSUPREM-4 is used to create the CMOS structures and MEDICI is used to extract their latch-up performance. The CMOS technology as used in the simulations resembles the actual process flow as closely as possible; to this end, several modifications to an existing TSUPREM-4 input deck have been implemented. Contact salicidation and substrate biasing appeared to be critical issues. The latch-up performance is characterized in particular by the holding voltage and the trigger current. A comparison of measured and simulated data shows good quantitative agreement for the holding voltage, but only qualitative agreement for the trigger currents.
CHAIR OF ELECTRO-OPTICAL SYSTEMS
Candidate: W.B. Bakker
Graduation date: 12 June 1997
Graduation project: "Modeling and Design of Cascaded Bi-directional EDFAs in Tree-and-Branch WDM Local Access Networks"
Supervision: drs. F.W. Willems (Lucent Technologies), dr.ir. H. de Waardt (TTE group)
Graduation professor: prof.ir. G.D. Khoe
Summary:
Introduction of broadband interactive services in lightwave networks with high splitting ratios often requires both upstream and downstream optical amplification to overcome the considerable losses present in the network. This report concentrates on the requirements for optical amplifiers as used in the ACTS AC028 TOBASCO (Towards Broadband Access Systems for CATV Optical Networks) architecture, where interactive services are introduced by adopting a High Density WDM Technique. The location of the amplifying elements needs careful investigation, since cascading of amplifiers in a split network can lead to a strong accumulation of the Amplified Spontaneous Emission in the upstream direction. In the TOBASCO architecture, optical amplification is done by means of Erbium-doped fiber amplifiers (EDFAs). We use the steady-state, spectrally resolved Giles model to model the gain and noise performance of cascaded EDFAs in tree-and-branch networks. The overall performance of the network is determined using a Gaussian approximation for the bit-error rate and the associated receiver sensitivity for a lightwave system incorporating optical amplifiers. Many possible amplifier locations have been investigated. For minimal network losses, the use of a single EDFA turns out to be sufficient for proper reception of the signals. However, the network losses increase throughout the years due to ageing of components, so two EDFAs have to be used to ensure proper operation of the network in a certain time span. Given the requirements for the TOBASCO architecture, the optimum location of the amplifiers is in the local splitting centre, with one amplifier before the splitter, and the other after the splitter. In most cases, noise optimization is not necessary. However, distributing most of the gain to the amplifier before the splitter leads to optimal noise performance.
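The Gaussian approximation for the bit-error rate used above is commonly expressed through a Q-factor; a small illustrative sketch of the standard textbook relation (not code from the thesis):

```python
import math

def q_factor(mu1, mu0, sigma1, sigma0):
    """Q-factor from the means and standard deviations of the
    received '1' and '0' levels (Gaussian approximation)."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Bit-error rate under the Gaussian approximation:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))
```

A Q-factor of about 6 corresponds to a BER near 1e-9, a common receiver-sensitivity criterion for amplified lightwave links; the accumulated amplified spontaneous emission of cascaded EDFAs enters through the '1' and '0' noise standard deviations.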
Candidate: M. Blum
Graduation date: 11 December 1997
Graduation project: Fully optical neuron for a laser neural network
Supervision: dr. Schleipen
Graduation professor: prof.ir. G.D. Khoe
Summary:
The work is part of a Laser Neural Network project that aims at realising a neural network in optics. A possible application for such an optical neural network is a fully optical cross-connect in a telecommunication network. The injection-seeding neuron is a new concept for a fully optical neuron, the basic element of a neural network. In the thesis the concept of the injection-seeding neuron is described and verified both theoretically and experimentally. The injection-seeding neuron realises the non-linear function, necessary for the implementation of a neuron, by injecting light into a semiconductor laser. The output power of the laser shows a sigmoid-like behaviour as a function of the injected power. When this injected power is made proportional to a weighted sum of inputs, an optical neuron is realised. By applying controllable feedback to the laser, the threshold of the neuron can be influenced. Simulations were done to examine the behaviour of the injection-seeding neuron theoretically. The model used for these simulations is the multi-mode rate-equation model. Procedures to find the steady-state solution and the time-dependent solution are described. The steady-state solution is used to examine the non-linear function of the neuron; the results from these simulations are consistent with the principle of operation. An experimental setup was built to verify the principle of operation of the injection-seeding neuron. A source laser is wavelength tunable and provides the signal to be injected into the neuron. A neuron laser has controllable feedback for each mode of the laser diode individually and a way of injecting external light. The experiments done with this setup show that the injection-seeding neuron indeed exhibits a sigmoid-like behaviour, but further experiments are necessary for a complete verification of the operating principle. A more stable experimental setup is necessary for future experiments, and recommendations are made on how this can be achieved.
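The neuron function described above, a sigmoid-like response to a weighted sum of inputs with a controllable threshold, can be sketched abstractly as follows (the gain and threshold values are hypothetical; the optical neuron realises this curve physically through injection seeding):

```python
import math

def neuron(inputs, weights, threshold, gain=1.0):
    """Abstract neuron: sigmoid-like response to the weighted sum
    of its inputs; the threshold shifts the curve, as the
    controllable feedback does in the laser neuron."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-gain * (s - threshold)))
```

At a weighted sum equal to the threshold the output sits at the midpoint of the sigmoid; larger sums drive the neuron towards its "on" state.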
Candidate: P.P.G. Borger
Graduation date: 24 April 1997
Graduation project: Stabilization, synchronization and management of the wavelength in a WDM network
Supervision: dr.ir. H. de Waardt, ir. J.C. van der Plaats
Graduation professor: prof.ir. G.D. Khoe
Summary:
Today's existing hybrid fiber/coax networks offer great possibilities for upgrading to a broadband network that can provide interactive services like LAN emulation, fast internet access and video conferencing to subscribers at home. In the European research program TOBASCO (TOwards Broadband Access Systems for CATV Optical networks) a WDM hybrid fiber/coax network is envisioned which is capable of delivering over 2 Mbit/s to every subscriber using Wavelength Division Multiplexing techniques. Wavelength Division Multiplexing is used to divide the immense bandwidth of optical fibers into channels on which different signals are multiplexed. The use of WDM, however, has the disadvantage that each laser wavelength has to be synchronized with its corresponding WDM channel. This graduation report describes the factors that can cause wavelength mismatch in a WDM network and shows how their effect can be minimized. In the TOBASCO project, DFB lasers are directly modulated with an STM-4 data stream. However, direct modulation of DFB lasers causes a wavelength variation in the output signal. This variation, called chirp, has been analyzed analytically as well as experimentally. To multiplex the different transmitter signals onto a single fiber, phased-array multiplexers are used. These devices cause crosstalk between the different channels. The influence of this crosstalk, as well as the influence of wavelength mismatch of these devices, on the bit error rate of the system has been analyzed and measured. Finally, the influence of changes in the temperature of the lasers and multiplexers was investigated. It has been shown that temperature variations should be kept to a minimum because the laser wavelength and the passband of the multiplexers are strongly temperature dependent. Therefore, hardware and software have been developed to keep the temperature of all devices within a small margin of the desired value.
Candidate: H.I. Sagir
Graduation date: 28 August 1997
Graduation project: A full duplex ASK Subcarrier Modulated FM Single Laser Transceiver
Supervision: ir. H.P.A. v.d. Boom, ing. P.K. v. Bennekom
Graduation professor: prof.ir. G.D. Khoe
Summary:
A full duplex ASK subcarrier-modulated FM single laser transceiver using coherent heterodyne detection has been investigated. In this configuration no external modulators are used and channel separation is achieved by using electrical subcarriers with different frequencies. The single laser transceiver (only one laser per transceiver) combines the advantages of coherent detection (higher selectivity and sensitivity) with the advantage of direct detection. After the design of the transmitter and the receiver for the single laser transceiver, simulations of the defined system were done using the simulation package SPOCS. The received signal power at a BER of 10^-9 and a bit rate of 34 Mbit/s is -36.7 dBm for the channel using a subcarrier frequency of 700 MHz and -35.13 dBm for the channel using a subcarrier frequency of 900 MHz, when only one channel is active at a time. This difference can be attributed to the bandpass filter which is used to separate the channels, because the other parameters used in the simulations are the same for both channels. The spectra at the different nodes of the receiver are also simulated. The single laser transceiver for two channels is realized using one laser. The number of channels can be expanded by using extra subcarriers at the transmitter side. In the realized configuration only one-way communication is possible because only one laser is modulated. The spectra were measured at the different points of the transceiver. A comparison of the simulated and measured spectra after the photodiode in the receiver showed that the simulated spectrum had no components at the subcarrier frequencies. This difference is probably due to SPOCS, which does not take direct-detection components into account in the simulation of the spectra. Finally the eye diagrams for the channels were measured at the output of the receiver.

From these it appeared that the eye diagram for the channel using a subcarrier frequency of 700 MHz was better than that for the channel using a subcarrier frequency of 900 MHz. This difference occurs because the frequency-modulation index of channel 1 is higher than that of channel 2, which means that the signal of channel 2 at the input of the ASK demodulator has a lower amplitude than that of channel 1. As ASK demodulator a passive mixer is used, which requires a high input power; this requirement is not fully satisfied for channel 2. For this reason the eye pattern for channel 1 looks better than that for channel 2.
CHAIR OF INFORMATION AND COMMUNICATION THEORY
Candidate: D.A. Kart
Graduation date: 11 December 1997
Graduation project: Shannon strategies for the binary multiplying channel
VF programme/Research theme: Coding for networks
Supervision: ir. H.B. Meeuwissen
Graduation professor: prof.dr.ir. J.P.M. Schalkwijk
Summary:
In 1960, Shannon introduced the two-way channel, and the problem of determining its capacity region. In other words, he defined the problem of communicating simultaneously in both directions over the two-way channel as effectively as possible. Many practical communication channels are intrinsically two-way channels. However, the engineer usually splits them up into two independent one-way channels by using a technique such as time sharing. This thesis investigates the capacity region C of the binary multiplying channel (BMC) by determining the normalized Shannon inner bound regions a_n of its derived channels K_n, n = 1, 2, ..., where K_n corresponds to transmitting blocks of fixed length n over the BMC. The normalized Shannon inner bound regions approach C as the value of n increases, i.e. lim_{n→∞} a_n = C. We investigate the normalized Shannon inner bound region for K_n by searching for the best possible equal-rate strategy for the BMC with blocks of fixed length n. Such strategies are called Shannon strategies; they can be visualized as resolutions of the unit square. We show that finding the best Shannon strategy for K_n corresponds to a bound-constrained optimization problem over 3n - 1 variables that can take values between 0 and 1. First, we present an algorithm that automatically generates the expression for the half-sum rate for K_n. Second, we optimize this expression by using the NEOS Server, which can be used to solve bound-constrained optimization problems remotely over the Internet. Previous Shannon strategies for K_n were found by random searches. Since the number of local maxima is very large, we have chosen a different approach: we use our knowledge of coding strategies for the BMC to estimate starting points for the optimization procedure of the NEOS Server for which we expect high half-sum rates. The strategies found so far for K_1 - K_3 are generally believed to be optimal.

However, the best known strategy for K_4 raised some doubts, since it is asymmetrical, i.e. the two terminals operate at different rates. Prior to this thesis, it was believed that the best strategy for any K_n, n ∈ ℕ, had to be symmetrical. Therefore, our first goal was to find a better symmetrical strategy for K_4. We did not succeed, and started to believe that it did not exist. So we returned to K_3; in fact, we investigated the two complementary asymmetrical strategies for K_3 that are not optimal. We discovered that the unit-square resolutions of these strategies show something unexpected: they appear to be symmetrical, but they contain one subshape that is divided in an asymmetrical way. This result gained importance the moment we observed that a similar shape also appears in the best known symmetrical strategy for K_4. We separately investigated this symmetrical shape, and concluded that the best division of this shape is indeed asymmetrical. Then we used this result to improve the symmetrical strategy for K_4: we submitted the problem, with a starting point based on our observation, by e-mail to the NEOS Server, and obtained a new and better asymmetrical strategy for K_4. In conclusion, we conjecture that the best strategy for K_4 is asymmetrical; this implies that the convex hull in the definition of the Shannon inner bound region is essential. In addition, it seems that the best equal-rate strategy for any K_n, n ∈ ℕ, is not necessarily symmetrical. The thesis concludes with some questions that might be interesting for future research.
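The bound-constrained search described above was performed remotely on the NEOS Server. As a toy stand-in, a projected-gradient ascent over variables bounded to [0, 1] can be sketched as follows (the objective here is a hypothetical concave function, not the K_n half-sum rate, whose many local maxima are exactly why good starting points matter):

```python
def project(v):
    """Clip every variable to the bound constraints [0, 1]."""
    return [min(1.0, max(0.0, x)) for x in v]

def maximise(grad, x0, step=0.05, iters=500):
    """Projected gradient ascent for a bound-constrained problem;
    a toy stand-in for the remote solver used in the thesis. With
    many local maxima, the starting point x0 decides which maximum
    is found."""
    x = project(x0)
    for _ in range(iters):
        g = grad(x)
        x = project([xi + step * gi for xi, gi in zip(x, g)])
    return x

# Hypothetical concave objective f(x) = -sum((x_i - 0.7)^2),
# whose unique maximum over [0,1]^3 lies at (0.7, 0.7, 0.7).
grad_f = lambda x: [-2.0 * (xi - 0.7) for xi in x]
```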
CHAIR OF ELECTROMAGNETICS
Candidate: M.C. van Beurden
Graduation date: 16 October 1997
Graduation project: Analysis of infinite phased arrays of printed antennas
Supervision: dr. A.B. Smolders (Hollandse Signaalapparaten), dr. M.E.J. Jeuken
Graduation professor: prof.dr. A.G. Tijhuis
Summary:
In this report, the infinite-array model is used to analyse large arrays of tapered-slot antennas. These antennas are well known for their wide bandwidth and wide scan range in the H-plane. Thanks to the infinite-array model, the analysis can be restricted to a single unit cell. The electromagnetic field in the unit cell is analysed using the method of moments, including the exact Green's functions of the unit cell; mutual coupling effects are therefore automatically included. The model is applied to analyse the effects of metallic walls on the E-plane scan behaviour of these arrays. It is shown that metallic walls, placed on the four sides of the unit cell, have a positive effect on the scan behaviour. However, when grating lobes enter the array, blind scan angles are likely to occur. This has also been observed in experimental data. The infinite-array model is also used to examine arrays which contain two radiating elements perpendicular to one another inside the unit cell. In this way, it is expected that the polarization properties of the electric field can be controlled. The results indicate that scanning in the principal planes is very well possible. Unfortunately, for scanning in the diagonal planes strong coupling is observed between the two elements inside the unit cell. Finally, a model to analyse single-polarized arrays with a triangular grid is presented. This model has been used to design an array of so-called bunny-ear antennas. It is shown that the scan behaviour improves when the elements are placed closer to one another.
Candidate: E. Hosea
Graduation date: 16 October 1997
Graduation project: A Hybrid Description of Indoor Electromagnetic Fields
VF programme/Research theme: EM-10
Supervision: dr.ir. W.M.C. Dolmans, dr.ir. M.E.J. Jeuken
Graduation professor: prof.dr. A.G. Tijhuis
Summary:
In this report a new model is developed to determine the electromagnetic field in a room, taking into account the effect of obstacles on the field pattern. The previously found reciprocity method can be combined with the hybrid technique, which unites the efficiency of an analytical method with the flexibility of a numerical method. The electric field at a point in the room is written as an integral over the surface that encloses the source and the obstacle. This surface thus marks the boundary between the region where only the analytical method can be applied (the region outside the enclosing surface) and the region where the numerical method can also be applied (the region inside the enclosing surface). The numerical method used is the finite-difference time-domain (FDTD) method. The resulting expression for the field has been implemented in software. The reciprocity method is applied in several cases. To test the reliability of this method, the field computation is first carried out using only the analytical method. The results show that the reciprocity method works very well for free space. Applied to an empty room, good results are also obtained as long as the room is not too resonant; this can be achieved by filling the room entirely with a dielectric. When applied to an obstacle in a room, the FDTD method is used to carry out the field computation inside the enclosing surface. This results in a hybrid FDTD/modal field computation.
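The numerical part of the hybrid method is FDTD. A minimal one-dimensional Yee leapfrog update in normalized units illustrates the scheme (an illustration only, not the thesis code, which operates in three dimensions; grid size, source position and pulse shape are hypothetical):

```python
import math

def fdtd_1d(steps, size=200, source=100):
    """Minimal 1-D FDTD (Yee leapfrog) with normalized fields and
    Courant number 0.5; a Gaussian hard source injects a pulse
    that propagates outwards in both directions."""
    ez = [0.0] * size
    hy = [0.0] * size
    for n in range(steps):
        # magnetic-field update from the spatial difference of E
        for i in range(size - 1):
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        # electric-field update from the spatial difference of H
        for i in range(1, size):
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        # Gaussian hard source, peaking at time step 30
        ez[source] = math.exp(-0.5 * ((n - 30) / 8.0) ** 2)
    return ez
```

In the hybrid scheme, fields like these are computed only inside the enclosing surface, while the analytical (modal) description handles the region outside it.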
Candidate: S.H.J.A. Vossen
Graduation date: 24 April 1997
Graduation project: Mutual coupling between a wire antenna of finite conductivity and a large object
Supervision: dr. Zwamborn (TNO-FEL)
Graduation professor: prof.dr. A.G. Tijhuis
Summary:
To obtain better insight into the electromagnetic interaction between an antenna and a dielectric body, for instance between a GSM telephone and a human head, the full three-dimensional electromagnetic-wave equation is solved numerically. The analysis is carried out in three steps. First, we derive a general integral representation for the magnetic and electric field quantities. This is done for electrically impenetrable as well as for inhomogeneous, lossy dielectric objects. A general form to represent the mutual coupling between two objects follows in a straightforward manner from this representation. The integral equations thus obtained are referred to as the coupling equations. These equations are solved with the aid of the Weak Conjugate-Gradient Fast-Fourier-Transform (WCGFFT) method. To validate our approach, we then carry out a pilot study, in which we consider the mutual coupling between two wire antennas of different length. In this problem we have an active wire antenna, which is driven by a delta-gap voltage source, and a passive wire antenna, which is driven by the field radiated from the active one. In our model, we also account for the finite conductivity of the two mutually coupled wire antennas. To this end, an approximation of the transverse distribution of the current is introduced, which turns out to be quite accurate. Finally, results are presented for the antenna-body problem in its simplest form, namely for a radially layered dielectric sphere and a dipole antenna. The extension of this model to more complex structures will be the topic of future research.
DEPARTMENT OF SYSTEMS FOR ELECTRONIC SIGNAL PROCESSING
CHAIR OF ELECTRONIC CIRCUITS
Candidate: R.F.M. Funken
Graduation date: 28 August 1997
Graduation project: Implementation of a Transform Based Audio Encoder
Supervision: dr.ir. P.C.W. Sommen (TUE) / ir. P.H.A. Dillen (Philips), ing. F.M.J. de Bont (Philips)
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary:
The digital multi-channel audio market is growing rapidly these days. The increasing bit rate requires large amounts of storage space, so compression is desired. Audio coding offers a method of compression by exploiting the masking phenomenon; transform coding is one type of audio coding, and there is a demand for knowledge about this kind of coding. To become familiar with transform coding, a software encoder was implemented. The general operation of such an encoder is a transformation from time-domain sampled audio to frequency-domain coefficients. The frequency coefficients are separated into exponents and mantissas, and the mantissas are stored with limited accuracy. The bit-allocation process determines how many bits are spent on each mantissa; the quantiser performs the actual rounding step. Finally the exponents and mantissas are packed into the output bit stream together with all required side information. A distinction is made between two bit-allocation methods: forward-adaptive and backward-adaptive bit allocation. The former transmits the entire bit allocation in the coded bit stream, the latter recomputes it in the decoder from the audio data. The output of the encoder may have a fixed bit rate or a variable bit rate. The encoder implemented here includes some unique features, especially made for research purposes. A subjective audio-quality test has been performed; it shows that the encoder implemented here has performance comparable to that of functionally similar encoders available on the market. The audio coding system used here combines a relatively simple coding algorithm, compared to other audio codecs, with a reasonable compression gain. The coding system for which the encoder was implemented appears to be a serious competitor to others on the market.
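The exponent/mantissa split with limited mantissa accuracy can be sketched per coefficient as follows (the bit count is hypothetical; a real encoder such as the one described would precede this with a time-to-frequency transform and drive the bit count from a psychoacoustic bit allocation):

```python
import math

def quantise_coefficient(c, mantissa_bits):
    """Split a frequency coefficient into exponent and mantissa and
    round the mantissa to the allocated number of bits."""
    mantissa, exponent = math.frexp(c)  # c = mantissa * 2**exponent, |mantissa| in [0.5, 1)
    q = round(mantissa * (1 << mantissa_bits))
    return q, exponent

def dequantise(q, exponent, mantissa_bits):
    """Decoder-side reconstruction of the coefficient."""
    return math.ldexp(q / (1 << mantissa_bits), exponent)
```

The exponent preserves the coefficient's magnitude exactly, so the rounding error is confined to the mantissa; spending more mantissa bits on perceptually important coefficients is exactly what the bit-allocation step decides.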
Candidate: M. Gebeyehu
Report no.: ESP-18-97
Graduation date: 16 October 1997
Graduation project: Algorithms and implementation of an adaptive filter for a quality surveillance system
Research theme: Digital Communication
Supervision: dr.ir. A.C. den Brinker
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary:
In this thesis the replacement of the analog front end of a quality surveillance system by a digital one is studied. The requirements for this digital system are that it performs at least as well as the existing system and that it is more flexible. In order to attain these goals a signal model is proposed and confronted with the actual data. The existing system is translated to the digital domain. It is shown that adaptive systems can be used, but only with due care, since the signal model does not agree with the signal model for which adaptive systems are intended. A theoretical analysis of one-parameter adaptive filters is presented, extending the existing analysis to the case of non-white input and/or reference signals. Finally, a DSP implementation is established as a discrete counterpart of the existing system. Using a DSP system for the implementation, the proposed system is inherently more flexible since the software can easily be adapted.
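A one-parameter adaptive filter of the kind analysed can be illustrated by a single-weight LMS update (the signals and step size below are hypothetical, and a white input is used for the illustration, whereas the thesis extends the analysis precisely to non-white inputs):

```python
def lms_one_parameter(x, d, mu):
    """Single-weight LMS filter: adapt w so that w*x[n] tracks the
    reference d[n]; returns the weight trajectory."""
    w = 0.0
    trajectory = []
    for xn, dn in zip(x, d):
        e = dn - w * xn      # instantaneous error
        w += mu * e * xn     # stochastic gradient-descent update
        trajectory.append(w)
    return trajectory
```

With a reference that is an exact scaled copy of the input, the weight converges geometrically to the true scale factor; correlated (non-white) inputs change the convergence behaviour, which is the case the thesis analyses.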
Candidate: Eric ten Haaf
Graduation date: 24 April 1997
Supervision: ir. C. Moons (Katholieke Universiteit Leuven), dr.ir. L.K.J. Vandamme
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary:
In het kader van het Erasmus-programma is mij een project aangeboden aan de Katholieke Universiteit Leuven (Belgie). Het project bestaat uit twee delen. Het eerste deel bestaat uit het ontrafelen van het operatingsysteem Windows NT. Hierbij wordt speciaal gekeken naar de dataacquisitiemogelijkheden. Windows NT blijkt standaard aileen geschikt te zijn voor "soft"-realtime operations. Dit heeft beperkingen voor de maximale bemonsteringsfrequentie (max. 200Hz). Voor hogere bemonsteringsfrequenties zijn "hard"-realtime operations noodzakelijk. Echter zonder additionele software is het niet mogelijk om "hard"-realtime operations uit te voeren. Het tweede deel van het project bestaat uit het ontwerpen van een data-acquisitie programma voor regeling en sturing van een Continue Variabele Transmissie testbank. Dit geheel moet werken onder het voorgenoemde operatingsysteem Windows NT. Het data-acquisitie programma bestaat enerzijds uit het visualiseren van aile signalen en anderzijds uit het regelen van de verschillende systemen. De gehele testbank kan geregeld worden met een bemonsteringsfrequentie van 61Hz. Daarom is de "soft"-realtime operation van Windows NT uitermate geschikt. De CVT is een geavanceerde volautomatische versnellingsbak waarvan de overbrengingsverhouding variabel is, in tegenstelling tot een klassieke automatische versnellingsbak. In de testbankopstelling wordt de CVT aangedreven door een elektrische motor. De motor wordt bestuurd door een frequentie-regelaar, waarin de koppel-toerenkromme van een automotor geprogrammeerd is. De overbrengingsverhouding van de CVT wordt bepaald door twee uitgangsdrukken die de stand van twee conische schijven (poelies) regelen waartussen een metalen riem gespannen is. Een derde uitgangsdruk zorgt voor aansturing van de koppelingsplaten. Het data-acquisitieprogramma leest de verschillende drukken, toerentallen en de temperatuur van de olie in. Deze gegevens worden gebruikt voor regeling van de oliedrukken en toerentallen. 
De regeling bestaat uit PID-regelaars met Feedback Linearisation. Door het toepassen van Feedback Linearisation wordt een verbetering in performansie van zo'n factor 5 verkregen. Tevens is het gedrag stabieler dan bij toepassing van een gewone PID-regelaar.
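The control idea above can be illustrated in a few lines. The sketch below is a toy simulation, not the thesis code: the nonlinear plant dy/dt = -a*y*|y| + b*u and all gains are assumptions chosen for demonstration only.

```python
def simulate(use_linearisation, steps=2000, dt=0.001, setpoint=1.0):
    """Control the nonlinear plant dy/dt = -a*y*|y| + b*u with a PI loop."""
    a, b = 4.0, 2.0                 # hypothetical plant parameters
    kp, ki = 6.0, 20.0              # illustrative controller gains
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        v = kp * error + ki * integral          # PI control law
        if use_linearisation:
            # Feedback linearisation: cancel the plant nonlinearity so
            # the closed loop sees the linear system dy/dt = v.
            u = (v + a * y * abs(y)) / b
        else:
            u = v
        y += (-a * y * abs(y) + b * u) * dt     # Euler step of the plant
    return y

print(simulate(False), simulate(True))
```

With the nonlinearity cancelled, the loop behaves as a fixed second-order linear system regardless of the operating point, which is where the reported performance and stability gains come from.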
Naam kandidaat: C. Lin
Afstudeerdatum: 12 juni 1997
Afstudeerproject: Design and implementation of SHANNI: a Stand-alone Hybrid Artificial Neural Network Implementation
Begeleiding: dr.ir. J. Hegt
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
The primary goal of this thesis work was to implement a stand-alone neural network system which can operate, in principle, without the intervention of a host computer. A stand-alone hybrid artificial neural network implementation (SHANNI) has been designed, built upon a few analog hardware building blocks. The digital part of the implemented system is built around a powerful Digital Signal Processor (TI TMS320C31-50). The DSP takes care of training the neural network by applying input and output patterns defined by the problem at hand, and by continuously updating/refreshing the weights. All of this can be done in a largely parallel manner, thus increasing the performance of the system. Since the analog network suffers from inherent noise, ways to determine and reduce the influence of this noise have been investigated, both theoretically and practically, through simulations on a host computer. The experimental results obtained indicate that noise does not deteriorate the system's performance significantly (regarding the training time) if the amount of noise is kept below a certain (practically feasible) level. The influence of noise on generalization performance and fault tolerance is still to be investigated. Theories on this matter appear to be very promising.
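The effect of weight noise on training can be illustrated with a very small example. The sketch below trains a perceptron whose weights are read through additive Gaussian noise during the forward pass; the network, task, noise level and seed are illustrative assumptions, not SHANNI's actual configuration.

```python
import random

def train_perceptron(noise_std, epochs=200, lr=0.1, seed=0):
    """Train a single perceptron on AND while its weights are read through
    additive Gaussian noise, mimicking noisy analog synapses."""
    rng = random.Random(seed)
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            # The forward pass sees noisy copies of the stored weights,
            # as an analog network would.
            nw0 = w[0] + rng.gauss(0, noise_std)
            nw1 = w[1] + rng.gauss(0, noise_std)
            nb = b + rng.gauss(0, noise_std)
            out = 1 if nw0 * x1 + nw1 * x2 + nb > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    # Evaluate with the noise-free stored weights.
    return all((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
               for (x1, x2), t in data)

print(train_perceptron(0.0), train_perceptron(0.02))
```

Sweeping `noise_std` in such a simulation is one way to estimate the noise level below which training time is not significantly affected.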
Naam kandidaat: A.W.M. Mathijssen
Afstudeerdatum: 16 oktober 1997
Afstudeerproject: Generation of phantom sound sources with Block Frequency Domain Adaptive Filtering
Begeleiding: dr.ir. P. Sommen
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
A phantom sound source is a virtual sound source generated by real sound sources. When listening to the real sound sources, the perception is that of a sound source placed elsewhere in the room; this is why the term 'phantom source' is used. In order to generate a phantom source, the distance and the direction of the phantom source have to be simulated by the real sound sources. This is done by filtering the signals of the real sound sources. Large filters are needed because the acoustic impulse responses are long. The systems also need to be real-time, and therefore the filters have to be implemented in an efficient way. It is difficult to generate a phantom sound source over a wide frequency range: at higher frequencies the wavelength is close to the size of the listener's head, and the shape of the head plays an important role in the perception of the sound. Due to the development of new fast signal processors, the frequency range has been enlarged. Recent research shows that it is possible to make phantom sources in a frequency range up to 16 kHz [1, 10]. The real-time systems for phantom sound source generation use Active Noise Control theory to estimate the filter coefficients. The adaptive algorithms used are based on gradient search techniques. The filtering takes place in the frequency domain and the samples are calculated block by block (Block Frequency Domain Adaptive Filtering, or BFDAF). The presence of an acoustic path after the adaptive filter requires the use of an algorithm in which the input signal is filtered with an estimate of this acoustic path. This algorithm is referred to in the literature as the filtered-x algorithm. In this work, several schemes of filtered-x algorithms are described, tested, and compared. These filters are based on two different algorithms.
The performance of the filters is measured while changing the properties of the filter (adaptation constant, number of input samples and the length of the Fast Fourier Transform) and after modelling the acoustic delay. Two systems for phantom sound source generation are implemented on a digital signal processing platform, one with loudspeakers and one with headphones. The results show that the phantom sound source is generated well with the headphones, but that there are some problems when loudspeakers are used.
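The filtered-x idea can be sketched in a few lines. The sample-by-sample version below is a simplification of the block frequency-domain (BFDAF) filters actually used; the paths, filter lengths and step size are illustrative assumptions.

```python
import random

random.seed(1)

def fir(h, x, n):
    """FIR filter output sum_k h[k] * x[n-k] at time n."""
    return sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)

def conv(a, b):
    """Linear convolution of two impulse responses."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

secondary = [0.0, 0.8, 0.3]                  # assumed acoustic path after the filter
sec_est = list(secondary)                    # its estimate (taken perfect here)
target = conv(secondary, [0.5, -0.4, 0.2])   # a reachable desired overall response

N, mu = 4000, 0.01
x = [random.uniform(-1, 1) for _ in range(N)]
w = [0.0] * 6                                # adaptive filter coefficients
y = [0.0] * N
errs = []
for n in range(N):
    y[n] = fir(w, x, n)
    d = fir(target, x, n)                    # desired signal
    e = d - fir(secondary, y, n)             # residual after the acoustic path
    # Filtered-x: adapt against the input filtered by the path estimate.
    fx = [fir(sec_est, x, n - k) for k in range(len(w))]
    for k in range(len(w)):
        w[k] += mu * e * fx[k]
    errs.append(e * e)

early = sum(errs[:200]) / 200
late = sum(errs[-200:]) / 200
print(early, late)
```

The squared error shrinks as the adaptive filter, seen through the acoustic path, converges towards the desired response; the block frequency-domain variants compute the same gradient far more efficiently via FFTs.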
Naam kandidaat: A.G. Mulders
Afstudeerdatum: 28 augustus 1997
Afstudeerproject: Design of a Sigma-Delta modulator for optical detector applications.
Onderzoekthema: ERASMUS / Universiteit Oulu
Begeleiding: prof.dr. T.E. Rahkonen, dr.ir. L.K.J. Vandamme
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
This work deals with the design of an analogue-to-digital converter for optical detector applications, based on a sigma-delta modulator. To reduce chip area and to improve linearity, the output current of a position-sensitive photodetector is offered directly to the sigma-delta modulator. This implementation requires mixed continuous-time/discrete-time circuitry. The designed third-order sigma-delta modulator converts its analogue input current to a single-bit digital signal. The system achieves a resolution of 14 bit for a stable input current range of ±90 nA, giving a total input-referred noise current spectral density of 0.16 pA/√Hz at 1 kHz. This is equivalent to the input-referred noise current spectral density of a transimpedance amplifier with a 650 kΩ feedback resistor. The output signal of the position-sensitive detector is AM modulated in order to limit DC offsets due to environmental light. The carrier frequency is 1 kHz and the modulating signal has a bandwidth of 200 Hz, so the signal band of interest reaches from 0.8 kHz to 1.2 kHz. The sampling frequency is 256 kHz.
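The core principle of such a modulator — a 1-bit output stream whose average tracks the analogue input — can be shown with a first-order discrete-time model. This is only an illustration: the actual design is third-order and continuous-time with a current input.

```python
def sigma_delta(u, n=10000):
    """Modulate the constant input u (|u| < 1); return the 1-bit stream."""
    integrator, bits = 0.0, []
    for _ in range(n):
        bit = 1.0 if integrator >= 0 else -1.0   # 1-bit quantiser
        integrator += u - bit                    # accumulate the quantisation error
        bits.append(bit)
    return bits

for u in (0.25, -0.6):
    bits = sigma_delta(u)
    print(u, sum(bits) / len(bits))              # the average approaches u
```

Higher-order loops shape the quantisation noise away from the signal band more aggressively, which is what allows 14-bit resolution from a single-bit quantiser at a 256 kHz sampling rate.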
Naam kandidaat: R.F.G. Pouls
Afstudeerdatum: 13 februari 1997
Afstudeerproject: Pre-echo reduction of transients in an audio coding system.
Begeleiding: dr.ir. P.C.W. Sommen, ir. A. Oomen (Philips), dr. R.J. Sluijter (Philips), dr.ir. R. v. Vleuten (Philips)
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
Compression of digital audio signals can be realized by means of a subband coding system such as the one standardized in MPEG2. For a higher performance of this coding system, the coding gain can be improved for stationary-like signals by doubling the number of subbands. This requires filters twice as long, under the constraint that the relative filter characteristics are unaffected. As a result of the longer impulse responses, the quantization errors made in the subbands are spread more in time at the decoder side and cause pre-echoes. For stationary-like signals the perceptible quality is improved as a result of the doubled number of subbands, but for transient-like signals the spreading of the quantization errors degrades the perceptible quality of the coded signal. To improve the quality for transient-like signals, four pre-echo reduction methods are discussed. All four methods were implemented and compared at equal bit rate with the unmodified coder. By visual inspection of the pre-echoes and by informal listening tests, the best of these four pre-echo reduction methods is selected.
Naam kandidaat: M. Rieck
Afstudeerdatum: 28 augustus 1997
Afstudeerproject: Skin cancer detection system, registration of moles in skin images.
Onderzoekthema: ERASMUS / Universiteit Oulu
Begeleiding: prof. Juha Röning, dr.ir. L.K.J. Vandamme
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
The only cure for malignant melanoma (skin cancer) is early detection: surgical removal of a still thin melanoma results in a complete cure. In this study the first steps are made in the development of a skin cancer detection system. The Skin Cancer Detection System is a computer vision tool for physicians that automatically screens images of skin for changes that are suggestive of melanoma. The study features two main subjects: automatic registration of skin lesions (or moles) in successive skin images, and the detection and measurement of changes within moles. The automatic registration can be defined as correctly labelling lesions that represent the same mole in successive images and identifying lesions that have no corresponding lesion in the other image. The latter are new lesions and could be skin cancer. Two moles that are labelled as the same mole are called a mole pair. For this registration a number of algorithms were implemented and tested; the best of these needs two initial mole pairs, finds the remaining mole pairs automatically, and does so correctly in 99% of the cases. For the selection of these initial mole pairs an algorithm was implemented that finds three correct initial mole pairs in more than 99.2% of the cases. When these two algorithms are combined to form the total registration process, the correct mole pairs are found in 98-99% of the cases. After registration, the moles in the successive skin images that represent the same mole are checked for changes. For this purpose, features of the moles are calculated that each describe some characteristic of the mole in question. The features used here are specially tailored to recognize malignant moles. Moles whose calculated features have changed over time are suggestive of melanoma.
The moles that are identified by the registration process as new moles, and the moles that are identified as changed after the comparison of mole features, are indicated to a physician for further investigation. All algorithms that make up the Skin Cancer Detection System are implemented in the Khoros Scientific Software Development Environment.
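The registration idea — two initial mole pairs fixing a transform, after which the remaining pairs follow automatically — can be sketched as follows. The 2-D similarity transform, the coordinates and the tolerance below are illustrative assumptions; the thesis algorithms are more elaborate.

```python
def transform_from_pairs(p1, q1, p2, q2):
    """2-D similarity transform (complex arithmetic) mapping p1->q1 and p2->q2."""
    a = (q2 - q1) / (p2 - p1)       # rotation and scale
    b = q1 - a * p1                 # translation
    return lambda p: a * p + b

def register(moles_old, moles_new, pair1, pair2, tol=0.5):
    """Pair each new mole with the closest transformed old mole; moles with
    no match within tol are flagged as possible new lesions."""
    f = transform_from_pairs(pair1[0], pair1[1], pair2[0], pair2[1])
    pairs, unmatched = [], []
    for m in moles_new:
        old, dist = min(((o, abs(f(o) - m)) for o in moles_old),
                        key=lambda t: t[1])
        if dist < tol:
            pairs.append((old, m))
        else:
            unmatched.append(m)     # candidate new lesion
    return pairs, unmatched

# Old image, and a new image that is shifted/rotated/scaled plus one new mole.
old = [0 + 0j, 4 + 0j, 1 + 3j, 5 + 2j]
a, b = 1.1 * complex(0.995, 0.0998), 2 + 1j
new = [a * p + b for p in old] + [10 + 10j]
pairs, fresh = register(old, new, (old[0], new[0]), (old[1], new[1]))
print(len(pairs), fresh)
```

Everything that maps onto a transformed old mole becomes a mole pair; the leftover point is exactly the kind of "new lesion" the system would flag for a physician.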
Naam kandidaat: M. de Rooij
Afstudeerdatum: 28 augustus 1997
Afstudeerproject: Realisation of a Neural Network, based on Coherent Pulse Width Modulation
Begeleiding: dr.ir. J.A. Hegt
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
During this project, building blocks for a neural network have been designed, based on the application of Coherent Pulse Width Modulation. The circuits are meant for implementation in the MIETEC N-well 0.5 µm CMOS process. The two most important units of a neural network are the synapse and the neuron; this work deals with the electronic imitation of these two units. The synapse imitation is done by a four-quadrant multiplier, with an input consisting of a pulse with variable duration and a weighting factor between -1 and +1. The synapse has a low power dissipation (13.2 µW), acceptable linearity of the output voltage versus weight, and excellent linearity of the output voltage versus input pulse width. It is also small in circuit size and has a large weight input range (2 V). The neuron imitation is done by an integrator/sample-and-hold part (relatively low power dissipation of 42 µW, and the ability to adjust the circuit to the number of connected synapses by changing the integrator capacitor), an inverse-sigmoid part to realise a saturation in the neuron's response (whose shape can be adjusted), and a comparator part to generate an output pulse with a duration dependent on the comparison between the sample-and-hold circuit and the inverse-sigmoid circuit (offset voltage better than 0.4 mV, propagation delay time less than 15.6 ns with a 10 pF load capacitance and less than 18.7 ns with a 20 pF load capacitance). The inverse-sigmoid part has to be realised only once for the entire neural network.
Naam kandidaat: M.A.H. Vogels
Afstudeerdatum: 28 augustus 1997
Afstudeerproject: Phase noise modelling of a PLL at 900 MHz using HDL-A.
Begeleiding: dr.ir. D. Macq (Mietec), dr.ir. L.K.J. Vandamme
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
The investigations have been carried out at Alcatel - Mietec in Brussels.
The purpose of the work was to investigate the phase noise properties of a PLL working at 900 MHz, such as is used in GSM. Phase noise is an important design parameter: if the level of phase noise exceeds a certain value, the bit error rate of the received signal becomes too high, deteriorating the system's total performance. The noise properties of the PLL were examined using a behavioural model written in HDL-A. This language makes it possible to obtain an efficient, accurate and relatively fast model of the PLL. The noise was investigated in the frequency domain as well as in the time domain. The latter model can be used to examine the amount of spurious signals generated as a result of the discrete character of the PLL. The most important conclusions are that the noise in the reference and at the filter output are the major contributors to the total output phase noise. If the loop is well designed, the VCO noise has only a minor influence. The sampling of the noise causes aliasing, especially when the spectrum is white, which can cause noise amplification. Reference and filter noise can be reduced by making use of different loop architectures.
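Why reference and filter noise dominate in-band while the VCO matters only far from the carrier follows from the loop's small-signal noise transfers: reference noise sees a lowpass, VCO noise a highpass. A sketch with assumed type-2 loop parameters (not the actual GSM synthesizer values):

```python
import cmath, math

K = 1e10                     # combined loop gain (assumed)
wz = 2 * math.pi * 5e3       # loop-filter zero (assumed)

def transfers(f):
    """Magnitudes of the reference and VCO phase-noise transfers at offset f."""
    s = 2j * math.pi * f
    g = K * (1 + s / wz) / s**2      # open-loop transfer of a type-2 PLL
    ref = g / (1 + g)                # reference/filter noise: lowpass
    vco = 1 / (1 + g)                # VCO noise: highpass
    return abs(ref), abs(vco)

for f in (1e2, 1e6):
    print(f, transfers(f))
```

Inside the loop bandwidth the reference transfer is close to 1 and the VCO transfer is strongly suppressed; far outside, the roles are exchanged, matching the abstract's conclusion.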
Naam kandidaat: Th.H. Wakkermans
Afstudeerdatum: 24 april 1997
Afstudeerproject: Optical detection of surface-acoustic waves.
Begeleiding: dr.ir. L.K.J. Vandamme, dr. R.C. Woods (University of Sheffield)
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
The project was done at the University of Sheffield, England, the centre for III-V technology in the U.K., in an ERASMUS programme together with Eindhoven University of Technology. In a SAW device the information is carried by a surface-acoustic wave (SAW) which propagates over a piezoelectric material. SAWs are easy to generate and detect electrically on piezoelectric materials with the use of InterDigital Transducers (IDTs). For analysing a SAW device, the different quantities of this acoustic wave have to be known. The aim of the project was to detect surface-acoustic waves optically on LiNbO3 and GaAs. This was done by optically detecting the height of the SAW, which is of the order of several Å. After a literature survey it was decided to use the surface grating technique for the optical detection of surface-acoustic waves. In this technique the SAW is used as a diffraction grating, and diffracted laser light coming from the SAW device has to be detected. The typical value of the acoustic wavelength varied from 28 to 70 µm. The implementation of this method resulted in an experimental set-up which has been used to detect surface-acoustic waves on LiNbO3 and GaAs optically. The attenuation of a SAW, and SAWs launched perpendicularly from an IDT, have also been investigated. The optical detection of surface-acoustic waves was done by using a silicon photodiode combined with a high-gain low-noise amplifier as the detector in the experimental set-up. This turned out to be a very effective method. With this method we have detected, for the first time, SAWs on GaAs, which is a promising material considering its lower acoustic speed and its optical characteristics.
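In the surface grating technique the first-order diffraction angle follows from the grating equation sin θ = λ/Λ, with λ the optical and Λ the acoustic wavelength. A quick check for the wavelength range quoted above, assuming a 633 nm HeNe laser (the laser actually used is not stated here):

```python
import math

lam = 633e-9                      # optical wavelength (assumed HeNe laser)
for Lam in (28e-6, 70e-6):        # acoustic wavelengths from the abstract
    theta = math.degrees(math.asin(lam / Lam))
    print(f"{Lam * 1e6:.0f} um grating -> first order at {theta:.2f} deg")
```

The deflection angles are of the order of a degree, which is why a well-positioned photodiode with a high-gain amplifier suffices to pick up the diffracted order.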
Naam kandidaat: R.T. Wilmans
Afstudeerdatum: 28 augustus 1997
Afstudeerproject: A Cellular Neural Network
Begeleiding: dr.ir. J.A. Hegt, dr.ir. D.M.W. Leenaerts
Afstudeerhoogleraar: prof.dr.ir. W.M.G. van Bokhoven
Summary:
This work is about the hardware design of a Full-Range Cellular Neural Network (FR-CNN). The report also suggests a method to find the boundaries of the basins of attraction (BOA) for 2-cell CNNs. An FR-CNN is a neural network consisting of identical neurons or cells with space-invariant templates, modified in such a way that it operates according to the full-range model. This model has the advantage that the state of a cell is confined between certain values, independent of the template parameters through which neighbouring cells affect the state of the cell. A circuit is suggested to implement a cell of an FR-CNN. It is shown that the suggested circuit does not function properly; a modification to this circuit is made, after which a well-functioning cell circuit is obtained. The boundaries of the basins of attraction of CNNs (not necessarily FR-CNNs) are the borders that separate regions in state space. These regions (basins of attraction) are the areas from which a CNN converges to a specific equilibrium point. To find the BOA of a 2-cell CNN, the Lyapunov energy function is examined more closely and (with some restrictions) a force is introduced. Finally the BOA is found by solving the differential equations that describe the cells' behaviour.
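Solving the cells' differential equations numerically, as done for the BOA, can be sketched with forward-Euler integration of a 2-cell CNN. The template values below are illustrative assumptions, not the thesis' templates.

```python
def f(x):
    """Standard CNN output nonlinearity: piecewise-linear saturation."""
    return max(-1.0, min(1.0, x))

def settle(x1, x2, a0=2.0, a1=1.2, steps=20000, dt=0.01):
    """Integrate dx_i/dt = -x_i + a0*f(x_i) + a1*f(x_j) from (x1, x2)."""
    for _ in range(steps):
        dx1 = -x1 + a0 * f(x1) + a1 * f(x2)
        dx2 = -x2 + a0 * f(x2) + a1 * f(x1)
        x1 += dt * dx1
        x2 += dt * dx2
    return round(x1, 3), round(x2, 3)

# Different initial states land in different equilibria; scanning a grid of
# initial states and recording the endpoints traces out the basins.
print(settle(0.5, 0.5), settle(-0.5, -0.5))
```

With these symmetric templates the two stable equilibria are at ±(a0 + a1) = ±3.2, and the diagonal initial states converge to opposite ones, showing how a BOA boundary separates the state space.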
Naam kandidaat: J.P.R. Compiet
Afstudeerdatum: 12 juni 1997
Afstudeerproject: Design of the preprocessing part of a low-power 100 MHz, 8-bit, bipolar, folding Analog-to-Digital Converter.
Begeleiding: dr.ir. D.M.W. Leenaerts, ir. G.G. Persoon
Afstudeerhoogleraar: prof.dr.ir. R.J. v.d. Plassche
Summary:
The architecture of this 8-bit ADC is based on a three-stage conversion, using cascaded folding & interpolating techniques. Compared to other multi-stage ADC architectures, folding & interpolating ADCs are based on non-linear analog preprocessing of the input signal. This architecture is an attractive solution for ADCs, as extremely linear circuit topologies are not required. In order to raise the resolution of folding & interpolating ADCs without raising the number of parallel input stages or the number of fine comparators, a cascaded folding & interpolating architecture is used. The ADC preprocessing part achieves a 55 dB Spurious-Free Dynamic Range (SFDR) while quantizing a 50 MHz full-scale input signal at 100 MSample/s. The ADC will be fabricated in an advanced bipolar IC process, and the preprocessing part dissipates only 6 mW from a single 3.0 V supply. The preprocessing part consists of the fine folding circuit with input gain stages, the reference ladder and bias circuits, and the complete coarse signal generation. This report is based on simulation results. Additionally, the layout of the preprocessing part has been extracted. For further research, the implementation of the folding ADC offers possibilities to scale down the power consumption once the bipolar process is better stabilized. The 'Nat.Lab' transistor parameters, used in the first simulations, should be better implemented in the IC-lab in Hamburg. The total ADC power can then be scaled down by a factor of 3. Also, a bipolar process with two (or more) metal layers and small interconnect vias reduces the total wiring capacitance and saves more power.
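The folding principle — reusing one small fine quantiser in every coarse segment — can be illustrated with a toy numerical model. The bit split, the ideal components and the triangular fold below are assumptions for demonstration, not the actual cascaded converter.

```python
def folding_adc(v, coarse_bits=3, fine_bits=5, vref=1.0):
    """Quantise v in [0, vref) to coarse_bits + fine_bits using folding."""
    n_fine = 1 << fine_bits
    seg = vref / (1 << coarse_bits)            # width of one coarse segment
    coarse = min(int(v / seg), (1 << coarse_bits) - 1)
    residue = v - coarse * seg                 # position within the segment
    if coarse % 2:                             # odd segments are folded back,
        residue = seg - residue                # giving a triangular waveform
        fine = n_fine - 1 - min(int(residue / seg * n_fine), n_fine - 1)
    else:
        fine = min(int(residue / seg * n_fine), n_fine - 1)
    return (coarse << fine_bits) | fine

# The folded residue repeats in every segment, so one small fine quantiser
# covers the whole input range while the coarse code selects the segment.
print([folding_adc((k + 0.5) / 256) for k in (0, 100, 255)])
```

In hardware the fold is produced by the non-linear analog preprocessing, and interpolation between folding signals further reduces the number of input stages.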
Naam kandidaat: J.M. Houet
Afstudeerdatum: 12 juni 1997
Afstudeerproject: Design of a 12-bit, 250 MSPS digital-to-analog converter in 0.35 µm CMOS technology
Begeleiding: dr.ir. D.M.W. Leenaerts, ir. G.G. Persoon
Afstudeerhoogleraar: prof.dr.ir. R.J. v.d. Plassche
Summary:
This report presents the circuit implementation and IC-layout of a 12-bit, 250 MSPS digital-to-analog converter (DAC). The DAC is implemented in C75, a 0.35 µm, five-layer-metal CMOS technology. At the differential output of the DAC, loaded with two external 50 Ω resistors, a full-scale differential voltage of 2 Vpp is generated. The matching performance and transition frequency of CMOS technology C75 and its predecessor C100 are analyzed. If the driving voltage Vgs - Vt (Vgt) of the transistors is low enough, both parameters are in favor of C75. Due to the low supply voltage of 3.3 V, low driving voltages are inevitable, and therefore it may be concluded that C75 is superior to C100. The DAC is based on a coarse-fine architecture, with 6 MSBs (Most Significant Bits) driving the coarse part and 6 LSBs (Least Significant Bits) driving the fine part. This architecture was chosen based on calculations of sampling time uncertainties and DNL. The binary-coded MSBs are encoded into a thermometer code by means of a segment decoder. Master-slave latches are implemented to synchronize the encoded MSB signals with the binary-coded LSB signals. To reduce clock feedthrough, buffers are added between the master-slave latches and the current switches. The differential output signals of these buffers drive the current switches. Depending on the state of the input bits, the switches route their corresponding current to the inverting or the non-inverting output of the DAC. Special attention is paid to the IC package in which the DAC is mounted, and the influence of the IC package on the performance of the DAC is discussed. To improve the linearity performance of the DAC, a special layout arrangement is required. Every bit and segment current source is constructed by means of a certain number of parallel unity current sources. The dimensions of the transistor used for one unity current source are derived from the DNL calculations.
To cancel the effects of process gradients across the chip, the unity current sources that make up a certain bit or segment current source are uniformly distributed in a matrix. The same reasoning holds for the current switches: every current switch is constructed by means of a number of parallel unity current switches, but these are uniformly distributed in an array rather than in a matrix. Simulations of the DAC show a glitch energy of 2.5 pVs in the single-ended output signal. This is not sufficient to obtain a glitch energy smaller than the energy of half an LSB. The settling time of the DAC to 10-bit accuracy equals 1.8 ns. Because of the complexity of the complete circuit, it is impossible to simulate specifications such as the S/(N+THD) ratio and SFDR; measurements of test chips are needed to determine these specifications. The chip area is approximately 5 × 2 mm². The power consumption of the DAC equals 200 mW, excluding the contribution of the digital circuitry.
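The segment decoding described above — binary MSBs converted to a thermometer code that switches the matching number of unit current sources — can be sketched as follows, with ideal unit currents assumed:

```python
def binary_to_thermometer(code, bits):
    """E.g. code=3, bits=3 -> [1, 1, 1, 0, 0, 0, 0]: 2**bits - 1 outputs."""
    return [1 if i < code else 0 for i in range(2**bits - 1)]

def dac_output(msb_code, lsb_code, msb_bits=6, lsb_bits=6, i_unit=1.0):
    """Coarse-fine DAC: thermometer-coded MSB segments plus binary LSBs."""
    segment_current = i_unit * 2**lsb_bits     # one segment = 2**lsb_bits units
    coarse = sum(binary_to_thermometer(msb_code, msb_bits)) * segment_current
    fine = lsb_code * i_unit                   # binary-weighted LSB part
    return coarse + fine

# Full 12-bit code with MSBs = 3 and LSBs = 5 -> 3 segments of 64 units + 5.
print(dac_output(3, 5))
```

Thermometer coding of the segments guarantees monotonic behaviour of the coarse part: incrementing the MSB code only ever switches one additional segment on, which is why it is preferred over binary weighting for the large currents.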
Naam kandidaat: E.J.F. Paulus
Afstudeerdatum: 13 februari 1997
Afstudeerproject: Ontwerp van een 200 MHz 9-bit folding A/D-converter in 0.35 µm CMOS technologie.
Afstudeerhoogleraar: prof.dr.ir. R.J. van de Plassche
Summary:
In this report the design of a 200 MHz 9-bit folding A/D-converter in 0.35 µm CMOS technology is presented. This converter is based on the folding and interpolation architecture. In order to operate at high frequencies, the A/D-converter is optimized for maximum bandwidth. A differential input circuit with Track & Hold is used in order to reduce harmonic distortion. The converter is divided into a 6-bit fine part and a 3-bit coarse part. For the fine part, four parallel, eight-times-folding signals are generated by two interpolating folding stages. In order to avoid glitches in the output codes, a synchronization scheme is used to synchronize the fine and coarse parts. The input frequency of the A/D-converter is limited to 80 MHz (100 MHz for 8-bit performance) due to harmonic distortion. The settling times of the analog preprocessing and the digital encoding are fast enough to allow a clock frequency of up to 200 MHz.
Naam kandidaat: J. Verlinden
Afstudeerdatum: 28 augustus 1997
Afstudeerproject: A resistor string D/A converter with true 16 bit performance.
Afstudeerhoogleraar: prof.dr.ir. R.J. van de Plassche
Summary:
In this report a multibit sigma-delta D/A converter with true 16-bit performance is described. In a traditional 1-bit sigma-delta D/A converter, integral linearity is always guaranteed; the price to pay, however, is a high oversampling ratio and many high-frequency components. To overcome these problems a multibit sigma-delta D/A converter can be used. The multibit sigma-delta D/A converter proposed in this report is a 10-bit resistor string D/A converter with 16-bit performance. The resistor string must have a linearity of 16 bit to make a 10-bit resistor string D/A converter with 16-bit performance. However, the highest possible linearity of a resistor string integrated in a standard CMOS process is only 11 to 12 bit. The system proposed here to increase the linearity is based on digital correction: to achieve a 16-bit linearity, the digital input is corrected in such a way that the output of the resistor string D/A converter attains a 16-bit linearity. A dynamic range of 16 bit can be achieved by oversampling and noise shaping. By measuring the resistor string, a matched model of the resistor string is made; with the help of this model, the digital input is changed, giving a 16-bit linearity. The best way to calculate the tap voltages of the resistor string is to measure the voltage across a row of resistors. After measuring the total number of resistor rows, the tap voltages can be calculated. The resistor rows are measured using a sigma-delta A/D converter with a fully differential switched-capacitor loop filter. The digital correction is performed on-line, with a calibration time of approximately 1.5 seconds.
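The combination of a measured resistor-string model with oversampling and noise shaping can be illustrated at a small scale. The 6-bit string, 1% mismatch and first-order error feedback below are assumptions chosen to keep the sketch short; the thesis uses a 10-bit string and an on-line calibration.

```python
import random

random.seed(2)

# A 6-bit "resistor string" with ~1% mismatch, measured once (calibration).
resistors = [1.0 + random.gauss(0, 0.01) for _ in range(64)]
total_r = sum(resistors)
taps = [sum(resistors[:i]) / total_r for i in range(65)]   # known tap voltages

def shaped_output(v, n=4096):
    """First-order noise shaping over the *measured* taps: the averaged
    output matches v far better than the raw string resolution."""
    e, acc = 0.0, 0.0
    for _ in range(n):
        want = v + e                               # error-feedback input
        code = min(range(len(taps)), key=lambda i: abs(taps[i] - want))
        out = taps[code]                           # actual tap voltage produced
        e += v - out                               # feed the error back
        acc += out
    return acc / n

print(abs(shaped_output(0.3141) - 0.3141))
```

Because the correction dithers between taps whose true (measured) voltages are known, the residual error of the average falls well below a single tap spacing, which is the mechanism that lets a coarse, mismatched string reach 16-bit performance.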
Naam kandidaat: R.H.T.J. van Wegberg
Afstudeerdatum: 28 augustus 1997
Afstudeerproject: Designing an 8-bit 100 MHz low-power folding Analog to Digital Converter
Begeleiding: dr. D. Leenaerts, ir. G. Persoon
Afstudeerhoogleraar: prof.dr.ir. R.J. van de Plassche
Summary:
Today's consumer electronic products, such as mobile phones, usually require high-speed data converters. Consequently, the use of low-power converters in such battery-powered products becomes increasingly important. The Analog to Digital Converter (ADC) discussed in this report is an example of such a product. In order to meet the demand for low power, this ADC has been implemented in an advanced bipolar IC process in which no substrate is used. This leads to very low parasitic capacitances around the transistors, providing ideal circumstances for designing low-power circuits. In this research, both the design and the layout of the ADC have been realized. To make implementation of the ADC in such a digital signal processing system possible, the corresponding static and dynamic specifications which an ADC needs to meet have been analysed and discussed. Furthermore, attention has been paid to the architecture of the ADC, which is based on the folding and interpolation technique. Applying this technique results in fewer comparators, since they are used more efficiently than in, for instance, a full-flash converter; fewer comparators consume less power and take up less chip area. The folding and interpolation technique used in this ADC is discussed, including the demands for correct functioning of the ADC. The interpolation is performed by an interpolating resistor ladder, which forms the interconnection between the analog preprocessing part and the digital part of the ADC. The resistor value has been determined using charge control, resulting in an optimum between clock feedthrough noise at the output of the interpolation resistor network and power dissipation of the analog preprocessing part.
Simulations of the ADC have shown that the design specifications mentioned above are met adequately, except for the power dissipation, which turned out to be a factor of 2 to 3 higher than the initial design specification of 6-9 mW. The corresponding layout realisation of the ADC started with a search for a compact layout for the comparators. In addition, the transistors that form the sample-and-hold circuit have been placed closely together, to prevent additional interconnection capacitances from being added to the sample-and-hold circuit and thus to attain the designed BER (bit error rate). The total chip area of the ADC is 4 mm × 4 mm, due to the limited number of metal layers that could be used during this research and the corresponding design rules of those metal layers.
VAKGROEP MEET & BESTURINGSSYSTEMEN
LEERSTOEL METEN EN REGELEN
Naam kandidaat: A.P. Jong Tjien Fa
Afstudeerdatum: 24 april 1997
Afstudeerproject: Closed Loop Identification in the Process Industry
Afstudeerhoogleraar: prof.dr.ir. A.C.P.M. Backx
Summary:
Although open-loop identification yields more accurate results in the sense of obtaining an optimal process model, closed-loop identification is preferred for various reasons: necessity (in the case of unstable processes), validity (in the case of nonlinear processes) and performance (in the case of optimal control). The problem with direct identification using closed-loop data is that both the process input and output signals contain a noise term due to feedback, which results in extra bias in the estimation. Closed-loop identification methods transform the closed-loop identification problem into two open-loop identification problems; the process model is then determined from both identification results. Common closed-loop identification methods are Indirect Identification, Coprime Factorization, Dual Youla-Kucera and the Two Step Method. The Two Step Method (TSM) is preferred for several reasons: it is a straightforward solution for bypassing the closed loop, it is compatible with software used at Aspen Tech, and knowledge about the controller is not necessary. In the first step of the TSM, the sensitivity transfer is determined. With this transfer, which incorporates the closed-loop dynamics, a noise-free process input signal is constructed. In the second step the process transfer is determined with direct identification between this noise-free input signal and the output signal. Because errors in the sensitivity transfer estimate directly influence the accuracy of the process estimate, the first step of the TSM has to be performed carefully. If a bias error is present in the sensitivity estimate, it may be corrected using identification by factorization, with the noise-free process input signal as an intermediate signal. The D-factor contains the errors made in the first step.
Correction by the D-factor must be considered carefully, since for accurate estimates of the sensitivity transfer the D-factor contains noise characteristics; in that case correction yields inferior models. The accuracy of a model is mostly measured in terms of the variance of the estimate. To create an optimal signal-to-noise ratio in both steps, it is necessary to use two different data sets that each provide the signal-to-noise ratio needed. This thesis mostly treats the Two Step Method. Bias and variance propagation from the first step to the second step are examined. Improvements of the TSM are also proposed, using Identification by Factorization and the Double Excitation Method (DEM): the first is used to determine the model complexity, the second creates an optimal signal-to-noise ratio for the TSM.
Naam kandidaat: V.M.G. van Acht
Afstudeerdatum: 12 juni 1997
Afstudeerproject: Haalbaarheidsonderzoek naar self-sensing magnetische levitatie.
Begeleiding: dr.ir. A.A.H. Damen
Afstudeerhoogleraar: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
At the Eindhoven University of Technology research is carried out at a three dimensional laser interferometer, which can be used to measure the position of an object. (For example the tool centre point of a robot). In the laser-interferometer a mirror is used to direct the laserbeam onto a retro-reflector on the object to be measured. The position and orientation of this mirror must be controlled extremely accurate and fast. One way to realise the laser deflection system is to use magnetic bearings. Magnetic bearings have the advantage that friction is extremely low, tracking can be extremely accurate (in principle) and hardware is cheap. Magnetic bearings use several electromagnetic coils to exert positioning forces onto the freely levitated ferromagnetic mirror. One way to obtain the position and orientation of the mirror (necessary for the position control system of the mirror) is to measure the inductances of the same coils which are used to levitate the mirror. This is called self sensing magnetic leviation. Self sensing magnetic levitation has the advantage that no additional position sensors are necessary. Therefore it can be cheaper and smaller. In this master of science thesis a pilot project for the magnetic levitated mirror is discussed. A steel ball is to be levitated and controlled in the two directions of the vertical plane by four coils. Position of the ball must be obtained by measuring the inductances of the coils. First the magnetic, electrical and mechanical parameters of the levitation system are measured. After that, the equations of a one-dimensional magnetic levitation system are derived for current controlled coils and voltage controlled coils. Then the two different actuators (current source and voltage source) are examined in detail. It will be shown that, in contrary to a voltage source, a current source is very likely to oscillate and suffers from a lot of output voltage noise when loaded with a coil. 
Next, five different ways to measure the inductances of the coils are discussed and examined in detail, and the best actuator/sensor combination is chosen. This turns out to be a voltage-controlled current source with an additional HF component to measure the inductance of the coil. After that, a controller is designed for the magnetically levitated ball, using the previously chosen actuator/sensor combination, and simulations with the proposed controller are carried out. Finally, the designed actuator and sensor are tested in practice, and recommendations are given regarding the magnetically levitated mirror system.
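The inductance-based position measurement described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: it assumes an idealised reluctance model L(gap) = L_leak + k/gap and an ideal HF excitation in which the coil current ripple is set purely by the coil's reactance; all numbers are invented for the example.

```python
import numpy as np

def coil_inductance(gap_m, l_leak=5e-3, k=2e-6):
    # Idealised reluctance model: inductance rises as the air gap shrinks.
    return l_leak + k / gap_m

def gap_from_ripple(i_ripple, v_hf=0.5, f_hf=10e3, l_leak=5e-3, k=2e-6):
    # At the HF measuring frequency the coil is dominated by its reactance,
    # so |I_hf| = V_hf / (2*pi*f*L); invert that, then invert L(gap).
    l_est = v_hf / (2 * np.pi * f_hf * i_ripple)
    return k / (l_est - l_leak)

true_gap = 1e-3  # 1 mm air gap (illustrative)
i_rip = 0.5 / (2 * np.pi * 10e3 * coil_inductance(true_gap))
gap_est = gap_from_ripple(i_rip)
print(gap_est)  # recovers the 1e-3 m gap from the HF current ripple
```

The same inversion, applied to the four levitation coils, would yield the two-dimensional ball position without any extra sensor, which is the core idea of the self-sensing approach.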
-57-
Candidate: J. Bakker
Graduation date: 24 April 1997
Project: Synchronisation of mailing-machine modules using low-cost, low-resolution pulse encoders.
Supervision: ir. R.J. Gorter
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
Buhrs in Zaandam manufactures machines for the graphic industry, also known as mailing machines. In these machines, couplings between modules are implemented both mechanically and electronically. The electronic couplings are the most flexible, but applying them throughout the whole machine line is too expensive. To meet customer demands in an economically sound way, Buhrs wants to apply a low-cost electronic coupling, for which a method has been devised. This report investigates whether that low-cost method can provide electronic couplings for the mailing-machine modules that are currently still mechanically coupled. Functionally, two types of module can be distinguished: the transport module and the processing module. The transport module consists of a cam-chain system that transports the products through the machine line; it is the master of the machine line. The other modules are processing modules, also called the slaves of the machine line. The drive shaft of a module makes one revolution per cycle and is therefore called the cycle shaft. Several slave cycle shafts must thus be coupled to one master cycle shaft. The position or phase of a cycle shaft is determined with a rotation sensor. With an electronic coupling, the slave controller follows the phase of the master. To this end, the tracking system is equipped with a sensor, a discrete PI controller, a velocity feedforward and an anti-windup mechanism. The cost of the electronic couplings can be reduced by lowering the sensor resolution of the slaves. This, however, increases the measurement noise: uniformly distributed quantisation noise that mainly degrades the static tracking behaviour. This drawback can be circumvented by adapting the control method; the adapted control method is called the Buhrs method.
This method is also called the asynchronous method, because the control frequency does not run synchronously with a fixed clock frequency. The method exploits the high resolution of the master sensor and controls at the instants at which the slave phase has advanced by a fixed fraction of a revolution. This implies that the control frequency varies with the slave speed. With this method, the quantisation noise of the slave sensor is eliminated, which mainly improves the static tracking behaviour. Its performance is investigated by comparing it with the unadapted conventional method, which has a fixed control frequency running synchronously with a fixed clock frequency and is therefore also called the synchronous method. Simulations with a feeder module as slave load show that the asynchronous method outperforms the synchronous method: asynchronous control gives better tracking behaviour, whereas the synchronous method gives better start-up behaviour. A combination of the two is called the hybrid method: at low speed the conventional method is used, and above a certain speed threshold the Buhrs method. The hybrid method is a good solution that combines the best properties of both methods. For coupling the feeder to the master reference in steady state, the asynchronous method needs a sensor resolution of only 2 pulses per revolution. During start-up, however, the allowed margin is then exceeded, so for coupling within the margin in all situations a sensor resolution of 8 pulses per revolution gives the best solution. The asynchronous method then always suffices and the hybrid method becomes superfluous.
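The event-triggered (asynchronous) update idea above can be sketched as follows. This is only an illustrative toy, not the thesis setup: it assumes a first-order speed loop with an invented time constant, invented PI gains, an 8-pulse slave encoder and a 1 rev/s master; the point is merely that the controller updates on encoder pulses rather than on a fixed clock.

```python
def simulate_async_pi(kp=1.0, ki=0.5, pulses_per_rev=8, t_end=2.0, dt=1e-4,
                      tau=0.05, ref_speed=1.0):
    """Event-triggered ('Buhrs') PI: the control action is only updated when
    the slave encoder emits a pulse, i.e. every 1/pulses_per_rev revolution.
    Plant: first-order speed loop with illustrative time constant tau."""
    phase, speed, integ = 0.0, 0.0, 0.0
    u = ref_speed                    # speed feedforward keeps the slave moving
    next_event = 1.0 / pulses_per_rev
    updates = 0
    for k in range(int(t_end / dt)):
        ref_phase = ref_speed * (k * dt)
        speed += dt * (u - speed) / tau
        phase += dt * speed
        if phase >= next_event:      # encoder pulse -> asynchronous PI update
            err = ref_phase - phase
            integ += err
            u = ref_speed + kp * err + ki * integ
            next_event += 1.0 / pulses_per_rev
            updates += 1
    return phase, ref_speed * t_end, updates

final_phase, final_ref, updates = simulate_async_pi()
print(final_ref - final_phase, updates)  # small phase error, ~16 pulse updates
```

Note that at standstill no pulses arrive and hence no updates occur, which mirrors the start-up weakness of the asynchronous method reported above.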
Candidate: M.L.A. Bierman
Graduation date: 12 June 1997
Project: Gamut mapping algorithms for colour prints
Supervision: ir. Y. Boers, ir. C. Gerrits
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
This report discusses the problem of colours that cannot be reproduced one-to-one on a printer. First, an introduction is given to the general problems of colour reproduction. A number of algorithms that address (some of) these problems are discussed and analysed. The tests required for this were carried out on a digital copier from Océ Technologies B.V. For the various classes of reproduction, the best-performing algorithms were selected from those described in this report: for colorimetric printing, weighted ΔE clipping; for photographic printing, also weighted ΔE clipping; and for business-graphics printing, the elastic compression algorithm. For combinations of these three, the elastic compression algorithm performs well.
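The weighted ΔE clipping idea can be illustrated with a minimal sketch. This is not the report's algorithm: the device gamut is stood in for by a tiny cloud of reproducible sample colours, and the weighted colour difference is a plain weighted Euclidean distance in a Lab-like space; all values are invented.

```python
import numpy as np

def weighted_de_clip(color, gamut_samples, w):
    """Weighted-ΔE clipping sketch: replace an out-of-gamut colour by the
    reproducible colour with the smallest weighted colour difference
    sqrt(sum(w_i * (c_i - g_i)**2)).  The gamut is represented here by a
    small cloud of printable sample colours (illustrative only)."""
    d2 = ((gamut_samples - color) ** 2 * w).sum(axis=1)
    return gamut_samples[np.argmin(d2)]

# Two candidate gamut colours in a Lab-like (L, a, b) space
gamut = np.array([[100.0, 0.0, 0.0],
                  [90.0, 20.0, 0.0]])
c = np.array([110.0, 15.0, 0.0])  # out-of-gamut colour

plain = weighted_de_clip(c, gamut, np.array([1.0, 1.0, 1.0]))
tuned = weighted_de_clip(c, gamut, np.array([0.25, 1.0, 1.0]))
print(plain, tuned)  # de-weighting lightness errors changes the chosen colour
```

The example shows the point of the weighting: with lightness errors de-weighted, a colour with a larger lightness deviation but better chroma match wins the clip.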
Candidate: J.N. Bohlander
Graduation date: 13 February 1997
Project: Simulation of the linear complementary-slackness class of hybrid systems
Supervision: dr. S. Weiland, ir. W.P.M.H. Heemels
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
This report deals with hybrid systems. Hybrid systems contain both continuous and discrete or logic elements that interact with each other and influence the dynamical behaviour of the total system. Unfortunately, little is known about how to analyse a hybrid system in a general way in order to obtain information about its dynamical behaviour, although such analysis would be very useful when designing or controlling hybrid systems. Hybrid systems have therefore received serious attention recently, and research has started on time analysis to gain insight into their behaviour. The theory described in section 2.1 was used as the starting point for this report. The authors of [1], Schumacher and Van der Schaft, established a way of describing hybrid systems of the complementary-slackness class. The idea is to define different sets of differential and algebraic equations (DAEs). Each DAE corresponds to a mode in which the system operates, depending on the active constraints that belong to this mode. To stay in a certain mode, its inequality conditions have to be satisfied. When an inequality condition is violated, it is necessary to switch to another mode where continuation is possible, sometimes accompanied by state jumps. Rules are given for determining the sets of DAEs related to the different modes and the way to jump from one mode to another. In [3], a method for selecting modes from the system's state vector is introduced. The theory on the linear complementary-slackness class of hybrid systems is used to develop a simulation program that can compute the system's state trajectories in Matlab. The resulting trajectories provide useful information about the system's dynamical behaviour. This report evaluates the theory set up in articles [1] and [3]. Subsequently, the implementation of the theory in a simulation program running in Matlab is explained. Numerical inaccuracies in Matlab cause erroneous trajectory computation.
A method that avoids these numerical-inaccuracy problems is proposed. Finally, two examples are given to illustrate the program's simulation results.
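The mode-switching mechanics of the complementarity-slackness framework can be illustrated with the simplest possible example: a unit mass above a rigid floor, where the gap y and the contact force λ satisfy y ≥ 0, λ ≥ 0, y·λ = 0. This toy (not one of the report's examples) has two modes and an inelastic state jump at the switch, exactly the ingredients described above.

```python
def simulate_mass_above_floor(h0=1.0, v0=0.0, g=9.81, dt=1e-3, t_end=1.0):
    """Two-mode complementarity sketch.  Mode 'free': lam = 0, y'' = -g.
    Mode 'contact': y = 0, lam = g (per unit mass).  The switch to contact,
    triggered when the inequality y >= 0 is about to be violated, is
    accompanied by a state jump (velocity reset, inelastic impact)."""
    y, v, mode = h0, v0, "free"
    traj = []
    for _ in range(int(t_end / dt)):
        if mode == "free":
            v -= g * dt
            y += v * dt
            if y <= 0.0:           # inequality violated -> mode switch
                y, v = 0.0, 0.0    # state jump at the impact
                mode = "contact"
        else:
            lam = g                # lam >= 0 holds, so the mode persists
        traj.append((mode, y))
    return traj

traj = simulate_mass_above_floor()
print(traj[0][0], "->", traj[-1][0])  # free -> contact
```

A general simulator replaces the hard-coded two modes with DAE sets generated from the constraints, and selects the continuation mode from the state vector as in [3].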
Candidate: E.B.G. Bras
Graduation date: 11 December 1997
Project: Identification and robust control of a laboratory-located process.
Supervision: dr. E. Bayens Lazaro
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
The project deals with the control of a laboratory-located MIMO process. Two controllers are developed: an LQR (Linear Quadratic Regulator) controller and a robust controller. To be able to develop a controller, a process model is estimated, which covers the main part of this thesis. To perform the system identification and to implement the controllers, a flexible graphical software program has been written that easily performs the above tests. Physical laws are used to derive a model structure, which overcomes the problem of choosing a correct model structure for identification; this method is also known as physically parameterized modelling. The model structure contains known and unknown parts, which are identified using a one-step-ahead prediction. A regularization technique is used to restore the ill-conditioned Jacobian caused by the poor influence of some identification parameters on the model output. An open-loop identification is performed on the final closed-loop system (including the controller). Open-loop identification of the closed loop was chosen in order to take a closer look at closed-loop identification and because the process is poorly damped. A nonlinear and a linear discrete process model are estimated. Because the nonlinear model cannot be used to develop our controllers, we use it as a way to test the derived model structure. The nonlinear model is obtained by performing a direct identification; that is, the process inputs are used as the identification inputs, instead of the reference inputs of the closed-loop system. The identification parameters of the linear model are retrieved from the identified closed-loop model by making use of the known controller model. Binary and generalized binary noise signals are used as test input signals. The a-priori information needed to design these signals is extracted from a calculated process model, where a good guess is made of the identification parameters.
The estimated nonlinear and linear models are validated by simulation, residual analysis and a scalar error measure, all of which indicate a small model error. The LQR controller used to perform the "closed-loop" identification is based on the above calculated model. With the estimated process model, a (new) LQR controller and a robust controller are developed, such that the closed loop is asymptotically stable and the performance requirements are met as well as possible. The performance requirements consist of a (well-known) zero tracking error, small overshoot and a "fast" step response without actuator saturation. The measured process outputs correspond to the states of our process model, so no observer is needed; this results in a robust stability margin. The identification parameters have a physical significance, and can easily change through, for example, changing a process valve. The possible parameter uncertainties are well defined, so a robust controller is developed that can deal with these uncertainties in an explicit manner. The μ theory is used to develop our controller, such that we can guarantee the pre-defined controller objectives regardless of the parameter uncertainties. The controller is calculated by assuming a full complex uncertainty matrix (unstructured uncertainty), so that "simple" algorithms can be used to calculate the controller. This leads to conservative results, as we are in fact dealing with structured uncertainty, which leads to a diagonal uncertainty matrix. The conservatism is reduced by keeping the dimensions of the structured uncertainty matrix as small as "possible". This can be achieved by choosing only uncertain parameters (efficient parameters) that have a considerable influence on the output. The regularization technique used during identification is used to select the efficient parameters. An uncertainty state-space representation is introduced to formulate our robust control problem in a μ framework. It handles uncertain parameters that appear in a state-space representation in an easy way, and can easily be implemented in Matlab using the μ-Analysis and Synthesis Toolbox. It is shown that the closed-loop performance deterioration of both controllers is small when all uncertain parameters undergo a +20% parameter change. This is not that surprising, as during identification we already noticed that most identification parameters have only little influence on the process output, which was our motivation for using a regularization technique.
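The LQR step above can be sketched numerically. This is a generic textbook computation, not the thesis design: it uses an illustrative discretized double integrator rather than the identified process model, and obtains the gain by iterating the discrete Riccati difference equation to convergence.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via iteration of the Riccati difference
    equation; u = -K x minimises sum(x'Qx + u'Ru)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative plant: double integrator sampled at dt = 0.1 s
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(K, rho)  # closed-loop spectral radius < 1: asymptotically stable
```

With full state measurement, as in the process above, the gain K can be applied directly without an observer.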
Candidate: P.C. Chao
Graduation date: 24 April 1997
Project: The lateral control of a vehicle
Supervision: dr.ir. A. v.d. Boom, ir. D. de Bruin
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
This thesis is part of the research on the automation of the hybrid bus control. The hybrid bus is a future bus that will consist of three carriages with, in total, eight independently controllable wheels. This thesis focuses on the modeling and control of one carriage of the hybrid bus, which is denoted the simplified vehicle. The simplified vehicle is a vehicle with only the most elementary properties: a rigid body with four wheels, where the wheels are treated as abstract. A nonlinear mathematical model of the simplified vehicle is obtained from a study of the literature in the field of automotive engineering. The nonlinear model describes the motion of the vehicle as the response to the applied control inputs, and is therefore suitable for simulating a broad range of maneuvers with the vehicle. However, the nonlinear model is not suitable for the design of the lateral controller. The lateral controller designed in this thesis is based on classical control theory; hence it is a PID controller. The controller's task is to steer the simplified vehicle along a reference trajectory by correcting its lateral deviations with respect to that trajectory. The nonlinear model can be linearized under the assumption that the vehicle follows the reference trajectory with only small lateral deviations. Hence, the lateral controller design is based on the linear model obtained by linearizing the nonlinear model. During the controller design, problems with respect to variations in the vehicle parameters are encountered: each possible configuration of vehicle parameters gives the simplified vehicle different dynamics, so no uniform controller can be developed.
Therefore, instead of a single controller, control strategies are developed for each possible configuration of vehicle parameters when the simplified vehicle travels at velocities below 45 km/h. With the aid of these control strategies, concrete controllers can be obtained for each specific configuration of the vehicle parameters. The linear and nonlinear mathematical vehicle models are implemented in the simulation software packages 'Psi' and 'Simulink'; hence all analytical results in this thesis are simulated in one of these environments.
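The linearization step described above — going from the nonlinear vehicle model to a small-deviation design model — can be sketched generically with finite differences. This is only a sketch under invented assumptions: the thesis linearizes its own vehicle model, while here a two-state toy lateral model (lateral error, heading error, fixed forward speed) stands in for it.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference linearization dx' = A dx + B du of a nonlinear
    model x' = f(x, u) around the operating point (x0, u0)."""
    n, m = len(x0), len(u0)
    fx0 = f(x0, u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - fx0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx0) / eps
    return A, B

# Toy nonlinear lateral model: x = [lateral error, heading error]
def f(x, u):
    v = 10.0  # forward speed in m/s (illustrative)
    return np.array([v * np.sin(x[1]), u[0]])

A, B = linearize(f, np.zeros(2), np.zeros(1))
print(A)  # small-angle model: the sin() has become a linear gain v
```

A (PID) controller tuned on such an A, B pair is then valid only near the reference trajectory, which is why each parameter configuration above needs its own design.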
Candidate: R.H.P. Coenen
Graduation date: 12 June 1997
Project: Possible energy savings at Aircraft Fuel Supply, Schiphol
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
This report deals with the possibility of future energy savings at Aircraft Fuel Supply (AFS) at Schiphol Airport. Whenever an amount of kerosene is demanded at the piers at Schiphol, AFS takes care of it. Reviewing the energy bills is always the first step in a good energy evaluation. From this evaluation it is concluded that AFS paid too much in 1996 and that the pumps are the main energy users (86%). The pumps at AFS were therefore evaluated. Keeping in mind the main policy of AFS — delivering a certain flow at any time at Schiphol — it turned out that the best way of saving energy is to reduce the head produced by the pumps, and the best way to do this is to change the speed of the pumps. This can be accomplished with variable speed drives. The best solution with respect to rangeability, speed, control accuracy and maintenance is the implementation of Variable Frequency Drives (VFDs). After calculating the savings and investment costs, and keeping in mind taxes, government subsidies, additional advantages and future perspectives, it is recommended that AFS install two variable speed pumps using VFDs.
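The kind of savings estimate behind such a recommendation can be sketched with the pump affinity laws: shaft power scales roughly with the cube of pump speed. All figures below (rated power, speed ratio, running hours, tariff) are illustrative, not the AFS numbers, and drive losses are ignored.

```python
def vfd_savings(p_nominal_kw, speed_ratio, hours_per_year, price_per_kwh):
    """Affinity-law estimate: running a pump at a fraction r of nominal
    speed draws about r**3 of nominal power (drive losses ignored)."""
    p_reduced = p_nominal_kw * speed_ratio ** 3
    saved_kwh = (p_nominal_kw - p_reduced) * hours_per_year
    return saved_kwh, saved_kwh * price_per_kwh

# Illustrative: a 75 kW pump slowed to 80% speed, 6000 h/year, 0.10/kWh
saved_kwh, saved_money = vfd_savings(75.0, 0.8, 6000.0, 0.10)
print(saved_kwh, saved_money)  # ~220 MWh/year saved in this toy case
```

The cubic law is why even a modest speed reduction pays back quickly: at 80% speed the pump draws only about half its rated power.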
Candidate: H. v. Dijk
Graduation date: 24 April 1997
Project: Modelling and control of an inverted pendulum on a cart
Supervision: dr.ir. A.J.W. van den Boom
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
A cheap and simple system to balance a stick on a cart is being developed. The construction of an existing system has been improved and the performance of the new construction has been evaluated. The main problems are the occurrence of slip and limit cycles; the main cause of the latter is the worm-wheel transmission. White-box models have been derived for the position of the cart and the angle of the inverted pendulum. With the results of black-box modelling, the unknown parameter values of these models have been estimated. The resulting models have been used to construct a simulation model of the complete system, which has been very useful in testing controllers and predicting the effect of disturbances on the system behaviour. A PID controller has been designed to balance a stick of one metre in length. Adding a proportional control action on the position makes it possible to balance the stick and keep the cart within one metre of the initial position. Experiments with smaller sticks show that the system balances a stick of 30 cm while keeping the cart within one metre of its initial position. From simulations, a way has been found to control the position of the cart by changing the offset of the angle signal; experiments confirm this.
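The stabilising part of such a balancing loop can be sketched on the linearised pendulum model θ'' = (g/l)·θ − (1/l)·a, with the cart acceleration a as control input. The gains and the PD-only structure below are illustrative, not the values or the full PID-plus-position loop used on the real cart.

```python
def balance(theta0=0.05, g=9.81, l=1.0, kp=30.0, kd=8.0, dt=1e-3, t_end=3.0):
    """PD stabilisation of the linearised inverted pendulum.  Control:
    cart acceleration a = kp*theta + kd*theta'.  The closed loop
    theta'' + (kd/l) theta' + ((kp - g)/l) theta = 0 is stable for
    kp > g (with l = 1) and kd > 0."""
    th, om = theta0, 0.0
    for _ in range(int(t_end / dt)):
        a = kp * th + kd * om
        om += dt * ((g / l) * th - a / l)  # angular acceleration
        th += dt * om                      # semi-implicit Euler step
    return th

print(abs(balance()))  # residual angle after 3 s: essentially zero
```

The extra proportional action on the cart position mentioned above adds a slow outer loop on top of this fast inner angle loop.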
Candidate: H.C.K. v. Doorn
Graduation date: 16 October 1997
Project: Floating platform control, based on black and white neural states.
Supervision: dr.ir. A.A.H. Damen
Professor: prof.dr.ir. P.P.J. van den Bosch
Summary:
The floating platform is a mechanical construction consisting of a triangular body with three floats attached to it. The platform lies in a square tub filled with water. On the platform, a crane has been mounted, which acts as a disturbance on the system. With the help of three servo systems it is possible to change the height of the three pillars that form the connection between the platform and the floats. The main goal of this thesis work is to develop a controller that meets our requirements; generally this means that the controller should suppress the wave disturbances acting on the platform as quickly as possible. To estimate a proper nonlinear model for use in a model-based controller, and to develop such a controller, we used neural networks, which are able to capture nonlinear behaviour. We used a state-space structure, implemented in a multilayer perceptron. Singular value decompositions and loss functions showed that a state dimension of seven was the best choice. The estimation results were satisfying, although the model predictions were biased. This bias is caused by standing and/or reflected waves, which behave chaotically: only over a small horizon after excitation are these waves correlated with the input signals, strongly dependent on the initial conditions. We tried to include these waves in the model by means of a dynamic nonlinear filter, for short-horizon predictions. Singular value decomposition of the output error showed that the waves can be modelled by a fourth-order model. Two methods have been used: one structure with a fixed output error fed back to the inputs, and one structure with a variable output error, which should also be minimized. After some stability problems, the first method gave good results: the bias disappeared almost totally.
The second method has not produced any good results yet, due to problems with the gradient calculation in the quasi-Newton optimization procedure. Finally, we can conclude that neural networks applied in a dynamic state-space structure are very useful for modelling nonlinear dynamical systems.
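The SVD-based choice of state dimension mentioned above can be illustrated as follows. This is a sketch, not the thesis procedure: a block-Hankel matrix is built from output data and its singular-value spectrum inspected — a sharp drop marks a plausible model order (seven for the platform; here a second-order toy signal, so the drop appears after two values).

```python
import numpy as np

def hankel_singular_spectrum(y, rows=10):
    """Build a Hankel matrix from output data and return its singular
    values; a sharp drop in the spectrum suggests the state dimension."""
    cols = len(y) - rows
    H = np.column_stack([y[i:i + rows] for i in range(cols)])
    return np.linalg.svd(H, compute_uv=False)

# Toy data: a damped second-order oscillation -> Hankel matrix of rank 2
t = np.arange(200) * 0.05
y = np.exp(-0.1 * t) * np.sin(2.0 * t)
s = hankel_singular_spectrum(y)
print(s[:4])  # two dominant singular values, then a drop to ~0
```

For real, noisy platform data the tail values are not zero; the order is then read off where the spectrum levels out into the noise floor.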
Candidate: P.J. Engelaar
Graduation date: 28 August 1997
Project: Modeling and controlling an articulated vehicle.
Supervision: ir. D. de Bruin
Professor: prof.dr.ir. P.P.J. van den Bosch
Summary:
This report describes the modelling and control of an articulated bus. The first chapter gives a semi-static description of a vehicle consisting of a tractor and two semi-trailers, with attention to the relation between the paths traced by the individual carriages. In the second chapter, the equations of motion for a bus consisting of a tractor and one semi-trailer are derived. The tractor-semi-trailer combination has three independently steerable axles. The dynamic model takes tyre properties and the mutual interaction between tractor and semi-trailer into account. Chapter three describes the design of the controllers: three controllers are designed, one for each steering input. Finally, the properties of the complete system are evaluated by means of simulations.
Candidate: M.B. van Leeuwen
Graduation date: 28 August 1997
Project: Electromagnetic flowmeters
Supervision: dr.ir. A.J.W. v.d. Boom, dr.ir. P.M.M. Bongers (Unilever)
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
Companies like Unilever deal with enormous amounts of liquid products. During the production process, often very complex systems are necessary for filling the packages (bottles, tubes, boxes, etc.). An important part of the filling system is the part that controls the amount of product put into the package. Due to the high filling speed, the dispensed amount of fluid varies; this variation is compensated for by means of an additional amount of product. In order to reduce this give-away, Unilever is interested in the possibilities of including flowmeters in the system. Electromagnetic flowmeters are very well suited to this task due to their robustness and hygienic properties. The objective of this study is to gain more insight into the behaviour and performance of electromagnetic flowmeters for volume flow measurement. To achieve this objective, the following has been done. For one broad class of electromagnetic flowmeters, a model has been derived that can simulate the behaviour of different configurations of the flowmeter for different velocity profiles. The model can also be used to investigate the relative performance of these configurations. This report shows that an estimate of the relative improvement between different configurations can be made on the basis of simulations with the model, and identifies trends concerning the improvement of performance between different configurations of the flowmeter. Besides volumetric flow measurement, this report briefly looks at the application of electromagnetic flowmeters for the measurement of rheological parameters of fluids.
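The basic measuring principle behind such flowmeters is Faraday induction: a conducting fluid moving through a magnetic field induces a voltage across the pipe, roughly U = k·B·D·v_mean. The sketch below inverts this relation to a volumetric flow; the factor k and all numbers are illustrative (ideal uniform velocity profile, k = 1), not taken from the thesis model.

```python
import math

def volume_flow_from_emf(u_volts, b_tesla, d_m, k=1.0):
    """Faraday-type flowmeter relation U = k * B * D * v_mean; the mean
    velocity times the pipe cross-section gives the volumetric flow.
    k is a geometry/velocity-profile factor (ideal uniform profile: 1)."""
    v_mean = u_volts / (k * b_tesla * d_m)
    return v_mean * math.pi * d_m ** 2 / 4.0

# 1 mV measured across a 25 mm pipe in a 50 mT field
q = volume_flow_from_emf(1e-3, 0.05, 0.025)
print(q * 3600 * 1000)  # volumetric flow in litres per hour
```

The thesis model refines exactly this picture: for non-uniform velocity profiles, k depends on the profile and on the electrode/field configuration, which is what the simulated configurations compare.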
Candidate: R.L.M. Looymans
Graduation date: 24 April 1997
Project: H∞ control of the air gap of a laser deflecting system.
Supervision: dr.ir. A.A.H. Damen
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
A laser tracking system is used to track the tool centre point of a robot with high accuracy, even when the robot is moving at high speed. To achieve this, a laser deflection system based on an air bearing was developed, which results in very low friction. An incoming laser beam is pointed at the centre point of a mirror. The mirror rests on an air cushion and can be tilted by the forces of three actuators; in this way the incoming laser beam can be deflected in the proper direction. The angles of deflection can be measured, as can the height of the air gap. The angle sensors should be calibrated in the future. The air gap height sensor circuit is examined and recommendations are given to improve this sensor. It was calculated that the equilibrium air gap height is between 32 and 42 µm. For the accuracy of the system it is important to keep the centre of the mirror surface at a constant position, and therefore the height of the air gap has to be constant. Variation of the air gap height has a negative influence on the deflection of the laser beam, and in particular on the measurement of the deflection angles. A model based on ODEs is derived for the air gap height system, and a frequency response measurement of the system is also carried out. From this, a model was chosen with which an H∞ controller was designed. Related bottlenecks in the H∞ design are measurement noise and bandwidth: due to the air gap sensor noise, the bandwidth of the controlled system cannot become larger than 20 Hz, while the ultimate goal is a bandwidth of 300 Hz. The designed controller was simulated with Simulink; from these simulations it can be concluded that the actuators do not saturate and that the disturbance reduction corresponds to the expectations from the controller design. The designed H∞ controller was tested on the real system using a DSP system.
The disturbance reduction of the real controlled system does not correspond to the results of the simulations, due to noise introduced by the DSP system. This problem can, however, be solved easily.
Candidate: J. Nauta
Graduation date: 16 October 1997
Project: Using PC soundcards for measurement and control applications
Supervision: ir. B. Hultermans, ing. buro v. Mierlo
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
In control systems education, demonstrations are frequently used to give students insight into the actual behaviour of the designed and calculated control systems. The test stands used for these demonstrations are also used for practical training, but to give all students the opportunity to work with them, multiple test stands are required. Looking at the specifications of PC soundcards, using a PC with such a soundcard can be an interesting and low-cost option. The hardware specifications of PC soundcards indicate one major problem in using them for measurement and control applications: the frequency range does not extend down to 0 Hz. Several methods are proposed to overcome this problem, the principal ones being hardware modulation with software demodulation for the input, and pulse width modulation (PWM) for the output. A search for software interfaces has indicated that when the standard registers of SoundBlaster soundcards are used, no problems are expected in controlling SoundBlaster-compatible cards. "SoundBlaster compatibility" also turns out to be the only useful standard for PC soundcards. The only "official" standard found is the Multimedia Personal Computer (MPC) standard, which describes the Multimedia PC and its components but is not very detailed: it specifies the sampling frequency, the number of ADC/DAC bits and the data transfer, but no accuracy, frequency range or software interface. The investigation of modulation techniques in hardware and demodulation techniques in software has indicated that amplitude modulation is the most convenient way to cross the DC blocking. After a mathematical treatment of the modulation/demodulation, including imperfections, a specific type of modulation is chosen: the so-called chopper modulation. The chopper modulator requires only basic electronics for the hardware modulator, and imperfections compared to ideal amplitude modulation can easily be coped with in software.
The PWM output interface is basic, and rather insensitive to DC variations as caused by the soundcard. The hardware interface that was designed is straightforward and does not require an external power source; only when power has to be delivered to an actuator are external power sources needed. A software program was designed with the basic functionality to control the soundcard, perform the demodulation and filtering, and generate the PWM signal. The measurements performed indicate that the combination of soundcard, hardware interface and software, running on a 100 MHz Pentium, can reach controller update rates of 2200 Hz. Higher update rates can be accomplished at the expense of coarse PWM signals. Optimal performance is reached at 1000 Hz. For pure measurement applications, update rates of 5500 Hz can be used.
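The chopper modulation/demodulation round trip can be sketched numerically. This is an idealised illustration, not the thesis implementation: the DC-blocking channel is modelled simply as mean removal, and the reconstruction low-pass is a crude moving average; chopper frequency and filter corner are invented.

```python
import numpy as np

def chopper_roundtrip(signal, fs=44100, f_chop=2205, fc=200):
    """Chopper sketch: multiply by a +/-1 square wave, pass through an
    AC-coupled (DC-blocking) channel, multiply by the same square wave
    again and low-pass filter.  This carries a DC signal across the
    soundcard's input blocking capacitor."""
    n = np.arange(len(signal))
    half = fs // (2 * f_chop)               # samples per chopper half-period
    chop = np.where((n // half) % 2 == 0, 1.0, -1.0)
    modulated = signal * chop
    channel = modulated - modulated.mean()  # AC coupling removes the mean
    demod = channel * chop                  # chop**2 == 1 restores the signal
    win = int(fs / fc)
    return np.convolve(demod, np.ones(win) / win, mode="same")

x = np.full(44100, 0.3)   # 0.3 V DC: invisible to a plain AC-coupled input
y = chopper_roundtrip(x)
print(y[len(y) // 2])     # the DC level survives the round trip
```

In practice the square wave is generated in hardware before the input capacitor, and the software demodulator must additionally compensate phase and amplitude imperfections, as discussed above.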
Candidate: M.A. Noten
Graduation date: 24 April 1997
Project: Controlling of a laser deflecting system
Supervision: dr.ir. A.A.H. Damen
Professor: prof.dr.ir. P.P.J. v.d. Bosch
Summary:
In this report, a model in terms of differential and algebraic equations (DAE) is derived that describes the dynamics of a laser beam deflecting system rotating simultaneously and independently in two directions. For the deflection of the laser beam, a mirror is used consisting of a steel semi-sphere that can rotate on an air bearing. The angles through which the mirror can rotate are controlled by three actuators. The dynamics of the system are derived in terms of the normal angles α and β and the parasitic rotation φ around the normal. The angles α and β define the mirror's unit normal and consequently the deflection of the laser beam, so they can be used directly to control the deflection. The rotation φ around the normal does not affect the deflection, but it is caused by the mechanical behaviour of the mirror system. Classical numerical integration software is not able to solve the DAE equations. For simulation of the mirror system, an index-three DAE system, the backward-difference-based code DASSL was available; the results are accurate and obtained much faster than results from simulations of a comparable model in Simulink. A linearized model of the system is also derived. This linear system is used as the basis for an LQG controller that controls the angles α and β; the rotation φ around the normal cannot be controlled directly. The LQG controller, a combination of an LQ controller and an observer, is then used as a controller for the nonlinear mirror system. The observer is needed because not all required states are available in the real system (only the angles α and β are measured). Simulations showed that a high state-feedback gain was necessary to decrease the influence of the rotation φ around the normal on the behaviour of the system to an acceptable level. To be able to use this large feedback gain, a saturation element is included in the design to protect the actuators against large peaks in the control input.
To eliminate the large overshoot in the system output when broadband signals are used as reference inputs, these inputs are rate-limited and filtered. Furthermore, the model was checked for consistency with respect to the choice of the coordinate system; simulation showed that the model behaviour is effectively independent of the coordinate system.
Candidate: A. v. Zijl
Graduation date: 28 August 1997
Project: Asynchronous motor control
Supervision: ir. W. Heemels, ir. M. Vonder (Buhrs Zaandam)
Professor: prof.dr.ir. P.P.J. van den Bosch
Summary:
This report was written as part of a project carried out to obtain the Master of Science degree from the Faculty of Electrical Engineering at the Eindhoven University of Technology. The project was carried out at the Measurement and Control section of the group MBS of this faculty, with the company Buhrs-Zaandam as a participant. Buhrs-Zaandam makes so-called mailing machines: machines that automatically gather and package all kinds of printed material so that the packages can be posted. These machines are driven by electrical motors. The aim of the project, and the subject of this thesis, is to develop a means of synchronising the different motors of these mailing machines. An important aspect is cost: as several of these synchronisation systems are needed in one machine, their cost should be low, and conventional synchronisation systems are too expensive. The main expense of such a synchronisation system lies in the sensor that measures the position of a motor. These sensors are usually of high resolution, typically allowing 1000 position measurements per revolution of the motor axis. In this thesis it is investigated whether a control scheme can be developed for which a position sensor of lower resolution is sufficient; the aim is to use only a few (1-10) position measurements per revolution of the master motor. First it is shown that this resolution is not sufficient for conventional control schemes; the performance of a standard PI controller and an H∞ controller is investigated. Next, two different control schemes are developed that are especially tailored for low resolutions. These controllers rely on the fact that they work asynchronously in time instead of having a fixed sample rate. The difference between the two is that one measures asynchronously in time and updates its control action at a fixed (synchronous) sample rate, while the other is fully asynchronous.
These two controllers meet the desired specifications, even for very low resolutions; typically 1 or 2 measurements per revolution of the motor axis are sufficient. Both controllers have their own advantages and disadvantages, and the final choice will depend on the ease of implementation of each.
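The event-driven idea described above can be sketched in a small simulation. This is an illustration of the principle only, not the thesis design: a slave motor's PI correction is updated only when a sparse position pulse from the master arrives, i.e. at event times rather than at a fixed sample rate. All gains and speeds are invented for the example, and the update assumes the nominal master speed is known.

```python
import math

def simulate(pulses_per_rev=2, t_end=20.0, dt=1e-3):
    kp, ki = 2.0, 0.5                  # hypothetical PI gains
    w_master = 10.0                    # master speed in rad/s (assumed constant and known)
    w_slave = 8.0                      # slave starts too slow
    th_m = th_s = integ = t = 0.0
    pulse_step = 2.0 * math.pi / pulses_per_rev
    next_pulse = pulse_step
    while t < t_end:
        th_m += w_master * dt
        th_s += w_slave * dt
        if th_m >= next_pulse:         # measurement event: master passed a marker
            err = th_m - th_s          # position error is known only now
            integ += err
            w_slave = w_master + kp * err + ki * integ   # asynchronous control update
            next_pulse += pulse_step
        t += dt
    return th_m - th_s                 # remaining synchronisation error in rad
```

Even with only one or two pulses per revolution, the position error between the events decays after a short transient, which is the behaviour the summary describes.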
LEERSTOEL MEDISCHE ELEKTROTECHNIEK
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M.P.T. Chin Kwie Joe 12 juni 1997 A prototype expert system for analysis and diagnosis of single chamber paced ECGs dr.ir. P.J.M. Cluitmans, dr.ir. J.A. Blom prof.dr.ir. P.P.J. v.d. Bosch
Summary:
A rule-based expert system has been proposed that assists clinicians in the often difficult task of analyzing multi-channel pacemaker electrocardiograms (ECGs). Analysis of pacemaker ECGs is important in the follow-up evaluation of patients with implanted pacemakers. Because of the complexity and variability of pacemaker algorithms, diagnosis of pacemaker ECGs is often considerably more difficult than the interpretation of ordinary ECGs. Nevertheless, comparatively little work has been done in this area, mainly because the diversity and complexity of pacemaker logic makes interpretation a difficult task. The proposed prototype expert system for interpretation of the pacemaker ECG can provide great clinical benefit, because few clinicians are adequately trained in the diagnosis of such ECGs for the interpretation of pacemaker functionality. The expert system guides the clinician through the analysis of the pacemaker ECG during a follow-up. The program bases its conclusion on the user's responses to questions the system poses regarding specific characteristics of the ECG waveform during multiple cycles; the clinician answers each question only with Yes or No. The system uses a top-down method to analyze the ECG information, evaluating two domain-specific tasks during the interaction. The system starts by analyzing the pacemaker ECG. After completing this first task the system knows whether there is a pacemaker malfunction (no output, intermittent output, noncapture, intermittent capture, oversensing or undersensing). When the malfunction type is identified, the system starts the second task: diagnosis of the pacemaker ECG. In this task the system tries to determine the cause of the pacemaker malfunction (e.g., lead-electrode fracture, pacemaker configuration setting, pacemaker malfunction, etc.). If a cause for the malfunction is found, a solution (if possible) is given to solve the problem.
Evaluation of this system by medical experts has demonstrated that it mimics an instructional assistant in a consistent and reliable manner. Although the system has not been tested extensively, preliminary tests show that the prototype could identify all the presented test cases. The expert system described in this report is still a prototype: the knowledge base contains the domain-specific knowledge of the single chamber pacemaker, while the domain-specific knowledge of dual chamber pacemakers is not yet implemented. Further development of the expert system is therefore necessary.
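The yes/no, top-down questioning style described above can be sketched as a small rule tree. The questions and malfunction labels below are illustrative placeholders, not the actual knowledge base of the prototype:

```python
# Hypothetical rule tree: each node either asks a yes/no question or
# states a diagnosis. The contents are invented for illustration.
RULES = {
    "question": "Are pacing spikes visible on the ECG?",
    "no": {"diagnosis": "no output (e.g. battery depletion or lead fracture)"},
    "yes": {
        "question": "Is every spike followed by a capture complex?",
        "no": {"diagnosis": "noncapture (e.g. threshold or electrode problem)"},
        "yes": {"diagnosis": "pacing and capture appear normal"},
    },
}

def consult(node, answers):
    """Walk the rule tree top-down, consuming one yes/no answer per question."""
    while "diagnosis" not in node:
        ans = answers.pop(0)      # the clinician answers only "yes" or "no"
        node = node[ans]
    return node["diagnosis"]
```

A session is then just `consult(RULES, ["yes", "no"])`, which descends to the noncapture branch; the real system layers an analysis task and a diagnosis task on top of this mechanism.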
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
R.J.G. Custers 12 juni 1997 Speech recognition for an environmental control system dr.ir. P.J.M. Cluitmans, ir. W.H. Leliveld, H.J.M. Ossevoort prof.dr.ir. P.P.J. v.d. Bosch
Summary:
This report covers a feasibility study on employing present speech recognition technology for environmental control by motor-disabled persons. The speech recognition system serves as an input device to operate the X-10 environmental control system by voice commands. The speech-driven interface should be small, stand-alone and low cost, and has to perform speaker-dependent isolated word recognition for approximately 30 words. Considering the costs, the speech-driven interface can best be built using standard components. An extensive field study of commercial speech recognition ICs led to five potential candidates. The MSM6679 voice recognition processor from Oki Semiconductors meets the requirements for the speech recognition system and was selected. Several tests of speaker-dependent isolated word recognition were conducted to obtain a fair indication of the MSM6679's recognition performance. The results were encouraging (more than 92% correct), and the project was continued with the MSM6679. The recognition performance of the complete system can be increased with a well-designed user interface. A literature study of user interface design aspects for voice-controlled devices resulted in many useful proposals regarding feedback, vocabulary management, microphone placement and training of the system. These human-factor proposals are incorporated in the design and, together with the feedback features of the MSM6679, they provide a sophisticated command dialog. The implementation comprises a micro-controller (µC) that commands the host-driven speech recognition processor to perform the various recognition and synthesis tasks. In addition, the µC performs the high-level operations that support the user interface and vocabulary management. Since the communication of the X-10 system is performed over the mains, there is a substantial danger of leakage currents. To avoid any physical contact with the mains, the X-10 system is operated by infra-red signals.
The infra-red codes are composed in software on the µC and sent by an infra-red transmitter circuit. Because of this, the device can also be used to generate infra-red signals for the operation of other appliances, such as TVs and radios. This extends the input device for the X-10 system to a flexible speech-driven remote control. The eventual experimental prototype can operate X-10 appliances wirelessly by voice commands, using a sophisticated command dialog with extensive auditive feedback. Several human-factor design proposals are incorporated in the prototype. Nevertheless, as functionality was the most important criterion for the design in this first stage of the project, the implementation is not yet optimized for energy consumption, size and user friendliness. In the future, further work has to be done on optimizing the system.
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M.B.M. Grit 28 augustus 1997 Educational simulation of the electro-encephalogram (EEG) dr.ir. P.J.M. Cluitmans, dr.ir. N.A.M. de Beer, dr.ir. W.L. van Meurs prof.dr.ir. P.P.J. van den Bosch
Summary:
Anesthesia protects patients undergoing surgical procedures from unnecessary pain and damage by inducing unconsciousness and analgesia, and by monitoring vital organs. The electroencephalogram (EEG) is monitored for the interpretation and management of the stages of anesthesia and for the early detection of hypoxia or damage to the brain. The Human Patient Simulator (HPS) is a full-scale patient simulator consisting of a patient-like mannequin that exhibits clinical signs and responds to therapeutic interventions in a realistic way. A neurophysiological extension of the HPS will be used for teaching anesthesia residents the handling of EEG monitoring equipment, the effects of anesthesia on the EEG, the effects of surgery on the EEG, and the differentiation of these effects. As a basis for the simulation of EEGs, an existing signal generator is described. This model-driven EEG signal generator simulates EEG signal components rather than the neuronal structure underlying the signal. The signal generator allows independent variation of the power or amplitude in each of the conventional EEG frequency bands. This is obtained by filtering Gaussian white noise in five different bands, each with its own independently variable gain. As part of the Master's project, the frequency responses of the five band filters are designed to resemble representative peaks in the frequency spectrum of real human EEGs. Peaks in these real EEGs are parametrized, and these parameters are used as the requirements for the filter frequency characteristics. The parameters are fitted by means of a narrow ideal bandpass filter combined with a Hann window. According to anesthesiologists with EEG monitoring experience, the patterns simulated with the EEG signal generator in which these filters are implemented look like typical EEGs and bear enough resemblance to actual EEGs to be used for educational simulation.
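The generator principle described above, Gaussian white noise split into the conventional EEG bands with independent gains, can be sketched as follows. The band edges, sample rate and gains here are illustrative, and a plain ideal band mask is used instead of the Hann-windowed filters fitted in the thesis:

```python
import numpy as np

FS = 250                      # sample rate in Hz (assumed for the example)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def simulate_eeg(gains, n=FS * 10, seed=0):
    """Sum of band-filtered Gaussian white noise, one gain per EEG band."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, 1.0 / FS)
    out = np.zeros(n)
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)          # ideal band-pass mask
        out += gains.get(name, 0.0) * np.fft.irfft(spec * mask, n)
    return out
```

Setting, say, `{"alpha": 1.0}` yields a signal whose spectral power lies entirely in the 8-13 Hz band, mimicking how the generator varies power per band independently.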
As a foundation for the modeling of drug effects on the EEG, the current insights in pharmacokinetic and pharmacodynamic modeling are summarized. A compartment model is used to predict the apparent effector site concentration, the effector site in this case being the brain. Drug effects on the EEG are modeled as a function of this concentration. From information about the effects of the important intravenous hypnotic propofol on the EEG, resulting from a literature search, a piece-wise linear relation between the propofol effector site concentration and the EEG effects is derived. A simulation of the effects of propofol with this model showed that it was necessary to shift the delta band filter towards lower frequencies. According to three anesthesiologists with EEG monitoring experience, the simulated effects of propofol are realistic, including burst suppression: an alternating pattern of EEG bursts and silences. In order to facilitate the extraction of burst suppression parameters from the scientific literature, the existing burst suppression model is modified. Suppression ratio and suppression duration are modeled as a linear function of the effector site concentration; in the chosen model the (dependent) burst duration is then also a linear function of the concentration. According to an EEG expert, burst suppression is simulated realistically enough for educational simulation. For the implementation of scenarios like carotid endarterectomy, the EEG simulator has to be extended with a second channel, the effects of damage and of different stages of hypoxia in the brain, and the effects of body temperature on the EEG. The addition of more drugs to the model will enlarge its field of application. The ultimate educational value of the EEG simulator has to be formally evaluated with the help of clinical instructors in a controlled training environment.
Possible future application areas for an EEG simulator, based on the current design, are the simulation of epileptic seizures, EEG monitoring of patients with head trauma in the ICU, and EEGs during various sleep stages.
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J.P.J. Heesakkers 28 augustus 1997 Software controlled proportional valve for NIBP measurements. dr.ir. P.J.M. Cluitmans, A.W. Bodbijl (Drager) prof.dr.ir. P.P.J. v.d. Bosch
Summary:
The blood pressure in the human body can be determined by pressurizing a cuff that is wrapped around a person's arm and measuring pressure pulsations inside the cuff while the cuff is deflated. The oscillometric method of Non-Invasive Blood Pressure (NIBP) measurement uses this concept for the determination of systolic, diastolic and mean blood pressure. The equipment engineered by Drager is able to perform an automated NIBP measurement at the press of one button, thereby facilitating the work of physicians and nursing staff. The equipment uses a pump to inflate a cuff to an initial cuff pressure and two deflate valves to deflate it; the cuff pressure is measured with a pressure transducer. Because the valves that deflate the cuff have only two positions, open or shut, the cuff is deflated stepwise (in steps of approximately 8 mmHg). A deflate valve is opened briefly to deflate the cuff to a preferred pressure level. The valve is then closed and the pressure pulsations in the cuff at this pressure level are measured. The amplitude of the pulsations ranges (for most patients) from 0 to approximately 5 mmHg, depending on the cuff pressure. After a number of pulsations have been measured, the cuff is deflated to the next pressure level. The amplitude of the measured pulses at each level is used to calculate the blood pressure. Recently a new generation of valves has become available: proportional valves, which offer current- (or voltage-) controllable flow restriction at a low price. The purpose of this research was to investigate the usability of a proportional valve for NIBP measurement. The possibility of linear deflation with a proportional valve was investigated. For this investigation the valve was integrated in a test setup with a Dialog 2000 monitor device and a personal computer, and a data acquisition system was developed for this setup.
Using the data acquisition system, a digital feed-forward PI controller was implemented that performs linear cuff deflation using the proportional valve. The data acquisition and control system was developed according to the strategies for real-time system specification of Hatley & Pirbhai and the Drager coding standard. The proportional valve is a nonlinear component: it has a hysteresis of 15%, a dead zone of approximately 50% of the control voltage range, and an undefined point at which the plunger first reacts to the applied voltage (the end of the dead zone). Despite these properties, measurements show that with the current setup the cuff pressure can be controlled to within 12 mmHg. The maximum accuracy is limited by the sample rate of the digital controller and the resolution of the analog-to-digital and digital-to-analog conversions. The advantages the new method of deflation offers are an estimated 25% shorter average measurement time and a more comfortable measurement for the patient. It remains to be investigated how the measurement time can be shortened even further in relation to the patient's heart rate. The integration of the data acquisition and control system in a monitor device also remains a subject of future research.
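The feed-forward PI idea for linear deflation can be sketched with a toy cuff/valve model. Everything here is illustrative: the deflation rate is taken as proportional to the valve command, the real valve's hysteresis and dead zone are ignored, and a deliberate mismatch between the assumed and "actual" valve gain leaves some work for the PI part:

```python
def deflate(p0=180.0, slope=-3.0, t_end=20.0, dt=0.02):
    k_nom, k_true = 40.0, 30.0     # assumed vs "actual" full-open deflation rate, mmHg/s
    kp, ki = 0.5, 0.2              # hypothetical PI gains
    u_ff = -slope / k_nom          # feed-forward valve opening for the wanted ramp
    p, ref, integ, worst, t = p0, p0, 0.0, 0.0, 0.0
    while t < t_end:
        err = p - ref
        integ += err * dt
        u = min(1.0, max(0.0, u_ff + kp * err + ki * integ))   # valve opening, 0..1
        p += -k_true * u * dt      # cuff deflates in proportion to the opening
        ref += slope * dt          # linear deflation reference
        worst = max(worst, abs(p - ref))
        t += dt
    return worst                   # worst tracking error in mmHg
```

Even with the 25% gain error, the PI term keeps the simulated cuff pressure close to the linear reference, which is the behaviour the controller above has to deliver on the real, far less ideal valve.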
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M.J.H. van den Hoek 28 augustus 1997 Searching for rules to detect cardiac failure in patients in an intensive care unit dr.ir. J.A. Blom, dr.ir. P.J.M. Cluitmans prof.dr.ir. P.P.J. v.d. Bosch
Summary:
An M.Sc. thesis research project was carried out on the detection of cardiac failures in patients in a hospital intensive care unit. The investigation is part of a research project at TU Eindhoven whose objective is the formulation of a knowledge base, in the form of rules, that allows reliable detection of cardiac failures while avoiding false alarms. These rules are meant to be part of a future patient monitoring system for an intensive care unit. Patient data from the European Improve project were used for the analysis. This database contains data on cardiac failures and measurements of various variables recorded from patients in an intensive care unit (e.g. peripheral temperature, cardiac index, pulmonary capillary wedge pressure). The knowledge discovery process used to develop the rules for the expert system includes five steps: data selection, pre-processing, transformation, data mining and interpretation, of which data mining and interpretation are the most complex and laborious. During the Improve project a rule of thumb for detecting cardiac failure was formulated by a clinical task group, and this rule has been used as a basis for various analyses. A detection rule was formulated in the form of boolean expressions. An analysis program was developed that checks whether, within a specified time window, the detection rule is capable of detecting the start of a cardiac failure period. In order to optimise the result, a number of parameter studies were executed. In addition, the variables suggested by the clinical task group were compared between cardiac failure and non-cardiac failure periods. The analysis leads to the conclusion that effective automatic detection of cardiac failures is possible, but requires patient-specific parameters in the detection rule. A general parameter set leads to about 64% sensitivity, but a positive predictive value of only 9%.
Further research is recommended into the relation between individual patient parameters and a patient's characteristics (weight, age, etc.), into the predictive value of other aspects of the analysed variables (rate of change rather than boundary values), and possibly into other variables (ECGs).
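The boolean-expression style of detection rule, and the sensitivity and positive-predictive-value figures quoted above, can be sketched as follows. The variables are the ones named in the summary, but the threshold values are invented for illustration, not the task group's rule of thumb:

```python
def rule(sample):
    """Hypothetical boolean detection rule over ICU variables."""
    return (sample["cardiac_index"] < 2.2
            and sample["wedge_pressure"] > 18.0
            and sample["peripheral_temp"] < 33.0)

def evaluate(samples):
    """Sensitivity and positive predictive value of the rule on labelled data."""
    tp = sum(1 for s in samples if rule(s) and s["failure"])
    fp = sum(1 for s in samples if rule(s) and not s["failure"])
    fn = sum(1 for s in samples if not rule(s) and s["failure"])
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv
```

Parameter studies of the kind described then amount to sweeping the thresholds in `rule` and re-running `evaluate` per patient or over the whole data set.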
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
O. Margarita 12 juni 1997 Estimation of a smoothing parameter for a spherical spline interpolation dr.ir. P.J.M. Cluitmans, ir. M.J. Quist prof.dr.ir. P.P.J. v.d. Bosch
Summary:
A helpful instrument for visualizing spatial aspects of EEG data is a topographic map of Scalp Potentials (SP). Because the data are measured at only a limited number of places, interpolation plays an important role in brain mapping. From earlier research, two interpolation techniques are suitable for brain mapping: K-Nearest Neighbours and spline interpolation. Spline interpolation may be applied to arbitrarily shaped 3-D surfaces (thin-plate spline) and to spheres (spherical spline). Research done at the Medical Engineering Group of the Eindhoven University of Technology indicates that spherical spline interpolation gives the best results for brain mapping. Spherical spline interpolation has a few advantages over the other interpolation methods mentioned above. First, the spherical spline has no discontinuities, and the extrema are not necessarily located at electrode positions: they can be located anywhere on the scalp. Second, the spherical spline offers a fairly simple computation of Scalp Current source Density (SCD). SCD is another way to represent a topographic brain map: it presents EEG data reference-free, with an emphasis on local phenomena. A further advantage of SCD over SP is that SCD suffers less from the smearing effect. For a reliable topographic map both SCD and SP maps are needed. Finally, smoothing can be implemented very easily in spherical spline interpolation. A smoothing parameter λ is useful when the measured data contain noise: smoothing increases the signal-to-noise ratio by smearing out the noise. In this study a smoothing parameter λ and an order m of the spline are estimated using the "leave-one-out" method as described by Wahba. The root mean square of the errors of the estimates at each omitted electrode position is minimized as a function of λ and m; this provides a quality measure for the estimation.
The software implemented for this project is an extension of an existing package developed at the Eindhoven University of Technology. The extended software has the following options: plotting of SP and SCD maps using (smoothed) spherical spline interpolation; adjusting the order m of the spherical spline (2 ≤ m ≤ 5); and estimation of a smoothing parameter λ.
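The leave-one-out principle used above can be sketched on a much simpler smoother than the spherical spline: a 1-D Gaussian radial-basis interpolant with ridge smoothing. The machinery differs, but the idea is the same: omit each "electrode", predict it from the rest, and pick the λ that minimises the RMS prediction error. All kernel widths and grid values are illustrative:

```python
import numpy as np

def rbf_predict(x_train, y_train, x_test, lam):
    """Smoothed Gaussian-RBF fit; lam plays the role of the smoothing parameter."""
    K = np.exp(-(x_train[:, None] - x_train[None, :]) ** 2)
    w = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    k = np.exp(-(x_test - x_train) ** 2)
    return k @ w

def loo_rms(x, y, lam):
    """RMS of leave-one-out prediction errors for a given lam."""
    errs = []
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        errs.append(rbf_predict(x[keep], y[keep], x[i], lam) - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

def best_lambda(x, y, grid=(1e-4, 1e-3, 1e-2, 1e-1, 1.0)):
    """Grid search for the smoothing parameter, as in the leave-one-out method."""
    return min(grid, key=lambda lam: loo_rms(x, y, lam))
```

In the thesis the same minimisation is additionally carried out over the spline order m, giving a joint (λ, m) estimate.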
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J.H.L. Salden 24 april 1997 EEG stationarity detection using classical and autoregressive methods dr.ir. P.J.M. Cluitmans, ir. M. van de Velde prof.dr.ir. P.P.J. v.d. Bosch
Summary:
The work described in this report is part of a project that aims at the development of neurophysiological monitoring technologies for critical care, especially during anesthesia. Recording of the electroencephalogram (EEG) and Evoked Potentials (EP) is one of the measurement techniques employed in research on monitoring a subject's anesthetic depth during surgery. Automatic and adequate EEG signal validation before analysis, i.e. ensuring signal quality, plays a very important role. Directly related to signal quality is signal stationarity, and this report describes several investigations of stationarity detection methods. A literature study was performed to investigate the work of other researchers in the field of EEG analysis and what they had done regarding the detection of EEG signal stationarity. This did not lead to satisfactory results: there is a large gap between theoretical definitions (and methods of detection) of signal stationarity and those that are useful in practical applications. Two feasibility studies were performed to see whether tests based on classical parameters and/or autoregressive (AR) parameters are able to detect signal variations in the EEG and are thereby indicative of EEG signal stationarity. These tests operate on a fixed signal window and use a threshold level for the detection of stationarity. A set of EEGs classified by an expert into "stationary" and "non-stationary" signal periods was used for test validation in these studies. Based on a criterion of non-critical acceptance of stationarity, the "best" tests were selected from those that resulted from the two feasibility studies. A scoring algorithm was devised, based on a moving-window implementation of these best tests, that assigns a score to each signal interval. One test was further employed in this algorithm to score EEG signals containing Auditory Evoked Potentials (AEPs), using another EEG data set from a clinical study.
New AEP calculations were done using these EEGs, excluding intervals with a low score, to see whether this exclusion would lead to an improved AEP quality. The conclusion from the two feasibility studies is that both classical and AR parameter based tests give a very acceptable performance when used for EEG signal stationarity detection. From the experiments with the scoring algorithm, no conclusions regarding universal application of this test method could be drawn.
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
P.A.G. Theunissen 24 april 1997 Regulation of arterial partial pressure of oxygen during Cardio-Pulmonary Bypass dr.ir. P.J.M. Cluitmans, dr.ir. J.A. Blom, C. Visser (AZM) prof.dr.ir. P.P.J. v.d. Bosch
Summary:
Cardio-pulmonary bypass is the technique of bypassing the lungs and heart during cardiac surgery. The function of the lung is replaced by an oxygenator, which exchanges oxygen and carbon dioxide with the blood. The arterial partial pressure of oxygen (paO2) is a good indication of the amount of oxygen in the arterial blood. At present the paO2 is usually controlled manually, by adjusting the composition of the gas flow towards the oxygenator. Because of this manual control the paO2 is subject to heavy fluctuations. A more constant value of about 20 kPa is beneficial for the patient, so a regulation system for the paO2 is desirable. The controlled process consists of the oxygenator and a pO2 sensor. The transfer function of the oxygenator is nonlinear and there are many factors affecting it. The pO2 sensor introduces a large delay into the control loop. A PID type controller was chosen. To acquire a parameter set for this PID controller that results in a satisfactory control system, a number of open-loop step responses were first taken from the process. From these, the tuning parameters were calculated with the tuning rules of Ziegler and Nichols. The worst-case circumstances were estimated and the calculated tuning parameters were adapted to this worst case. The controller was tested during three operations on goats; as far as oxygen consumption is concerned, these goats are comparable with adult humans. After some minor adaptations during these operations, the controller behaved very well. Furthermore, the controller was tested in an in-vitro test setup in which large disturbances were introduced. The controller managed to bring the pO2 back to its setpoint quickly and with a small overshoot. From the tests it appeared that the controller tuning does not really affect the peak value of the overshoot. The controller can therefore be tuned very conservatively without introducing large overshoots.
Because of this conservative tuning, the probability that the controlled process becomes unstable is very small, even after the future introduction of new and different equipment.
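The open-loop Ziegler-Nichols procedure mentioned above can be made concrete: from a recorded step response one reads off the apparent dead time L and the normalised maximum slope R (reaction rate per unit step), and applies the classic table. This is the textbook form of the rules; the thesis then de-tunes the result for the estimated worst case:

```python
def zn_pid(dead_time_L, slope_R):
    """Classic open-loop (reaction curve) Ziegler-Nichols PID settings.

    dead_time_L: apparent dead time of the step response, in seconds.
    slope_R: maximum slope of the response, normalised per unit step input.
    """
    kp = 1.2 / (slope_R * dead_time_L)   # proportional gain
    ti = 2.0 * dead_time_L               # integral time
    td = 0.5 * dead_time_L               # derivative time
    return kp, ti, td
```

For example, a process with 2 s of dead time and a reaction rate of 0.3 per second per unit step gives Kp = 2, Ti = 4 s, Td = 1 s; conservative tuning then means reducing Kp (or enlarging Ti) from these nominal values.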
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M.M.H. Willems 11 december 1997 Simulations of neuronal signals using the NEURON development environment dr.ir. P.J.M. Cluitmans prof.dr.ir. P.P.J. van den Bosch
Summary:
This Master's thesis describes simulations of neuronal signals written with the NEURON development environment. These simulations will be used as part of the curriculum for first-year students of the Biomedical Engineering programme at the Eindhoven University of Technology. Since the year in which this project was carried out is the first year of this programme, all course material has to be newly developed. NEURON is a nerve-simulation program designed around the central idea of one-dimensional cables, which can be connected into an arbitrarily branched cable and whose properties can vary continuously along the length axis. The design goal is to separate the physical properties from the numerical segmentation of the cables. The simulations are intended to replace the laboratory experiments they describe. In this way the student can develop a feeling for the kinds of signals that can be measured on nerve fibres, and for the mechanisms that cause them. The modelling was done as accurately as possible in order to guarantee a realistic simulation. The first series of simulations treats the phenomena that cause the resting potential across the membrane and the passive and active signal conduction through the cell. The second simulation depicts the human knee-jerk reflex, a complete reflex arc, as an example of a biological control system; this simulation thus shows the propagation of nerve pulses through a large part of the human body. The NEURON program is most efficient when used for simulations ranging from parts of a cell to a few cells. The first set of simulations, of a large cell, shows that fast and accurate simulations may indeed be expected here.
The second case, however, shows that even a large and complex system of neurons can be simulated, albeit with some approximations and at limited speed.
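A much-reduced sketch of what the first simulation series illustrates is the passive response of a membrane patch, electrically an RC circuit, to a constant injected current. The parameter values below are textbook-style placeholders, not taken from the NEURON models in the thesis:

```python
import math

def passive_membrane(i_inj=1e-10, t=0.02, v_rest=-0.065, r_m=1e8, c_m=1e-10):
    """Membrane voltage (V) after t seconds of constant injected current.

    i_inj: injected current in A; r_m: membrane resistance in ohm;
    c_m: membrane capacitance in F; v_rest: resting potential in V.
    """
    tau = r_m * c_m                        # membrane time constant (10 ms here)
    return v_rest + r_m * i_inj * (1.0 - math.exp(-t / tau))
```

The voltage charges exponentially from the resting potential towards v_rest + R*I with time constant tau, the same behaviour a student would observe in the passive-conduction simulations before active (spiking) mechanisms are added.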
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
P.H.A.G. Wouters 13 februari 1997 The gradual development of a decision-support expert system for the evaluation of pacemaker function: a second prototype dr.ir. P.J.M. Cluitmans, dr.ir. J.A. Blom prof.dr.ir. P.P.J. v.d. Bosch
Summary:
The interpretation of pacemaker ECGs is a difficult procedure, especially for novice diagnosticians. To address this problem, a collaboration was set up between the Medical Electrical Engineering section of the Eindhoven University of Technology and the cardiac catheterisation department of the Catharina Hospital. The main goal of the project is to provide support in the evaluation of the various pacemaker functions. This goal can be achieved by developing an expert system. In 1994 ir. Bourgonje laid the foundation with a first prototype expert system. Both parties were very positive about the result, so a follow-up study was started. This thesis concentrates on the complete development of one system task: the analysis of the pacemaker ECG. This system task consists of two subtasks. In the first task we evaluate the shape of the ECG signal. If the ECG signal can be classified into intrinsic beats and stimulated beats, we examine the time intervals between two consecutive beats; if not, we have found a problem state for the pacemaker. The time interval between two consecutive beats depends on the heart-pacemaker interaction, so the time intervals are tied to a number of conditions that must hold for correct pacemaker function. The results of the second prototype are promising. The evaluation was carried out with a test population of 25 cases; in 96% of the cases the correct conclusion was reached with the help of the expert system. To increase the functionality of the expert system it is advisable to integrate it into an operational environment. In its current form the expert system is not user friendly; the original objective therefore requires that more attention be paid to this in the future.
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M.G. van der Zee 16 oktober 1997 dr. P. Cluitmans, ir. W. Leliveld prof.dr. P. v.d. Bosch
Summary:
The breathing volume taken by a horse gives information about the horse's condition. Under anaesthesia it can be used as an indication of the depth of anaesthesia, and during training the breath volume indicates the horse's level of fatigue. The methods normally used to measure the breath volume make use of a breathing mask. One of the disadvantages of this breathing mask is that it obstructs the airways and so changes the breathing pattern. Besides this, the horse tends to get restless from the equipment put on its nose. These problems with existing methods resulted in a need for a new, non-obstructive method for measuring the breath volume taken by the horse. The method described in this report uses 40 kHz ultrasound to follow the lung margin of the horse. From the changes in lung-margin location it appears to be possible to follow the changes in breathing volume. Ultrasound is unable to penetrate air-tissue interfaces, and this complicates the use of ultrasound with horses. The horse has a thick coat that contains a lot of air. This coat reflects a large part of the ultrasound sent by the transmitters, and the large amplification needed to still see anything results in very noisy measured signals. Because of the large amplification, the equipment is extremely sensitive to movements of the sensors, which makes fixing the sensors to the skin a very important task. One way to ease the problems is to shave the horse; this makes it possible to reduce the amplification and so decreases the sensitivity to sensor movement. Because one of the demands on the equipment was that shaving the horse is not allowed, the conclusion is that this ultrasound method is not usable for measuring the breath volume of horses. Because humans do not have a coat like horses, the 40 kHz ultrasound can be used for other measurements on humans; in one of the tests the beating of the heart is clearly seen on the computer screen.
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
A.M.M. Bellussi 16 oktober 1997 Tracking method for smeared X-ray marker shadows dr.ir. P. Cluitmans prof.dr. Hasman
Summary:
At Maastricht University, research is being done into the local growth of the heart. For this purpose, small metal beads are implanted in the heart wall of dogs. Over a period of months the dogs are regularly X-rayed, and digital images are recorded. The metal beads appear in the images as dark shadows, which are sometimes smeared by motion blur. With current techniques the smeared shadows cannot be identified reliably. In this Master's project a method was developed with which (smeared) marker shadows can be identified and tracked. The tracking method is based on a model of the marker trajectory developed here; in an optimisation process the model converges to the actual marker trajectory. In images in which artificially smeared marker shadows were inserted, the differences were measured between the defined marker centre positions and the marker centre positions estimated by the tracking method. The differences are at most 15% of the marker diameter in pixels. The developed method is able to track the smeared marker shadows in real video images automatically. The influence of the contrast of the marker shadows on the result was also investigated.
LEERSTOEL ELEKTROMECHANICA & VERMOGENSELEKTRONICA
Candidate: A.C.P. de Klerk
Report no.: EMV 97-12
Graduation date: 28 August 1997
Graduation project: Design of a short stroke actuator for the ATLAS reticle stage
Supervision: dr.ir. J.C. Compter (Philips CFT) / ir. A.T.A. Peijnenburg
Graduation professor: prof.dr.ir. E.M.H. Kamerbeek
Summary:
The Mechatronics department at Philips CFT is involved in the development of wafer steppers for ASM Lithography (ASML). A prototype of the first machine of a new generation of wafer steppers, which has to be on the market in 1999, is being developed at Philips CFT in co-operation with ASML. A wafer stepper consists of a number of modules: the wafer stage, the reticle stage, the lens unit and the illumination unit. The wafer stage positions the wafer beneath the lens. The reticle stage is mounted between the lens and the illumination unit, and positions the reticle. One part of the prototype being developed at Philips CFT is the short-stroke actuator for the reticle stage. This actuator is based on the Lorentz force principle and is used for extremely accurate positioning in three degrees of freedom. The actuator to be developed must meet very demanding specifications for the motor constant, disturbance forces, temperature rise and dynamic performance. To compare the performance of different designs, a figure of merit is introduced, with which the best-performing design can be chosen. The actuator design has been optimised using finite element packages and an analytical model. Two design phases were necessary to arrive at a suitable actuator. This actuator has been built and experiments have been carried out. The experiments show that the designed actuator fulfils the defined specifications. Unwanted effects occurred during the experiments, namely damping and disturbance forces. These effects need to be investigated further to get an idea of their influence on the ultimate performance of the actuator.
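The idea of ranking designs by a single figure of merit can be sketched as follows. The abstract does not give the actual weighting, so the combination below (reward a high motor constant, penalise disturbance force and temperature rise) and all weights are purely illustrative.

```python
def figure_of_merit(motor_constant, disturbance_force, temp_rise,
                    weights=(1.0, 1.0, 1.0)):
    """Hypothetical figure of merit for comparing actuator designs:
    a higher motor constant raises the score, larger disturbance forces
    and temperature rise lower it. The real weighting used in the report
    is not stated in the abstract."""
    w_k, w_f, w_t = weights
    return w_k * motor_constant / (1.0 + w_f * disturbance_force
                                   + w_t * temp_rise)
```

With such a scalar score, candidate designs from the finite-element and analytical optimisation can be compared directly: the design with the highest score wins.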
Candidate: H.G.J. Tilleman
Report no.: EMV 97-16
Graduation date: 11 December 1997
Graduation project: Cogging-torque control in brushless DC motors
Supervision: ir. J.C.M. van Hoek, ir. K.M. Dieleman (both of PreMoTec, formerly Philips Mechatronics)
Graduation professor: prof.dr.ir. E.M.H. Kamerbeek
Summary:
The graduation work was carried out for PreMoTec in Dordrecht, formerly Philips Mechatronics, a company that develops and produces small DC motors with permanent-magnet excitation (p.m. motors). In these motors, the torques that occur while the windings carry no current are called cogging torques. They arise when the wound part is made of soft-magnetic material and provided with slots in which the windings are placed. Cogging torques cause additional speed variations and are therefore generally undesirable. Cogging-torque reduction methods known from the literature turn out not to be practically usable for solving the problems in PreMoTec's motors. The graduation work therefore focused on explaining the various cogging-torque components in PreMoTec's motors, in particular for the standard brushless DC motor with 9 stator slots and 12 rotor magnet poles, in the expectation that a better understanding of the cogging mechanism would also lead to better control over the formation of cogging torques in this motor. At the start of the investigation a known theory I was used, in which a periodic permeance function is employed for the slotted air gap between stator and rotor iron. This function is independent of the magnetisation pattern of the magnet. Measurements first confirmed that theory I is not usable for calculating cogging torques. Theory I was therefore extended and refined during this graduation work into a theory II, in which the influence of the slotted air gap is described with several coefficients that do depend on the orders of the magnetisation components of the rotor magnet. These coefficients can be calculated by means of field simulations (finite element method).
These simulations also confirmed that theory I is in general not usable for quantitative considerations, and in particular not for the motor studied. With theory II, in principle all observed cogging-torque components can be explained. Moreover, these components can be expressed in terms of measurable magnetisation components and coefficients to be determined by simulation. The measurements and calculations of the cogging torque agree reasonably well. Better results can be expected if the description of the magnetisation of the magnet is improved: so far the magnetisation is accounted for by a fictitious average magnetisation, determined by measurement, at the average radius of the rotor magnet. With theory II, possible explanations and expressions are also given for observable disturbance components in the cogging-torque spectrum, which owe their existence to small to very small disturbance components (asymmetries) in the rotor magnetisation or in the geometry of the stator slotting. The fundamental of the cogging-torque spectrum (order 36) can be minimised by choosing a particular slot-opening width, but it turns out that other cogging-torque components then become more prominent. The developed theory II could be used in the future to optimise the total cogging-torque spectrum, at least if all relevant magnetisation components in the magnet are known sufficiently accurately in magnitude and phase, preferably also as a function of the radius within the magnet.
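The order-36 fundamental mentioned above follows from a standard counting argument: the fundamental cogging order per mechanical revolution is the least common multiple of the slot count and the pole count. A one-line sketch (the function name is ours):

```python
from math import gcd

def cogging_fundamental_order(n_slots, n_poles):
    """Fundamental cogging-torque order per mechanical revolution:
    the least common multiple of the stator slot count and the rotor
    pole count. For the motor studied (9 slots, 12 poles) this gives
    the order-36 fundamental named in the abstract."""
    return n_slots * n_poles // gcd(n_slots, n_poles)
```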
Candidate: H.J.J. Domensino
Report no.: EMV 97-05
Graduation date: 24 April 1997
Graduation project: Class D amplifier design with low output impedance
Supervision: dr.ir. A. Veltman
Graduation professor: prof.ir. J. Rozenboom
Summary:
Conventional class A and B amplifiers have very poor power efficiency. The increasing need for small, efficient and portable amplifiers has encouraged research into more energy-efficient power amplification. Class D amplification is such a method, and an amplifier design based on this principle is presented here. Most class-D amplifiers developed at this moment are based on the principle of Direct Duty cycle Control, which makes the amplifier very sensitive to internal and external influences. Moreover, the output characteristics of the amplifier are determined by the output filter that class-D amplification requires. The design process starts from an already realised class-D amplifier using the Integral Pulse Control (IPC) method. This method ensures that the mean output voltage of the power stage equals the reference voltage. This IPC-based amplifier is already insensitive to influences such as power-supply and parameter variations, but still does not include the output filter in the control loop; consequently the output characteristics are still determined by the output filter. The IPC method is transformed into a new control method based on hysteresis control of the output-filter capacitor current, thereby including the output filter in the control loop. This way of controlling the system theoretically reduces the order of the output filter by one: if, and only if, the current through the filter inductor L is controlled perfectly, the inductance no longer affects the output characteristics. Implementing this control method requires measuring the output-filter capacitor current over a frequency range of 10 Hz to 2 MHz. A circuit with a coaxially wound current transformer and an active load was therefore designed, which achieved a bandwidth of 10 Hz to 30 MHz within 3 dB. Though this bandwidth is unnecessarily large for this application, the design can be useful in many other applications.
Because the capacitor current is the differentiated output voltage, the principle requires good suppression of high-frequency components of both the differential- and common-mode output signal. A suitable output filter combining differential- and common-mode filtering in one small magnetic part was developed. The secondary circuits of the amplifier all have standard implementations and are discussed only briefly. The final amplifier, though not yet optimally dimensioned, has been tested. The test results, such as a power-supply ripple suppression of more than 60 dB and an output resistance of about 20 mΩ at 10 kHz, are very promising, and this amplifier design certainly has a lot of potential.
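The core of the hysteresis control described above can be sketched in a few lines: the switch command flips whenever the measured capacitor current leaves a tolerance band around its reference. This is a generic hysteresis comparator, not the report's exact implementation.

```python
def hysteresis_switch(i_cap, i_ref, band, state):
    """One step of a hysteresis current controller: keep the measured
    output-filter capacitor current i_cap within +/- band of the
    reference i_ref. 'state' is the present switch command (+1 = high
    rail, -1 = low rail); inside the band the previous state is kept."""
    if i_cap > i_ref + band:
        return -1   # current too high: switch low to force it down
    if i_cap < i_ref - band:
        return +1   # current too low: switch high
    return state    # inside the band: no change, limiting switching rate
```

The band width trades switching frequency against current ripple, which is why the measurement path needs the wide bandwidth described in the abstract.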
Candidate: A.W. v.d. Heuvel
Report no.: EMV 97-04
Graduation date: 24 April 1997
Graduation project: Power feedback circuits for lighting purposes
Research theme: Elektrische Energietechniek
Supervision: dr. J.L. Duarte, ir. J. Willaert (Philips Lighting B.V.)
Graduation professor: prof.ir. J. Rozenboom
Summary:
This report describes circuits used for lighting applications. The circuits described here all draw a current from the mains whose shape is very close to sinusoidal. Earlier circuits needed extra circuitry to shape the input current; with the type of circuits described here this is not necessary. This means fewer components are needed, which reduces the total cost of the circuit and at the same time improves reliability. A disadvantage of the circuits described in this report is that the light intensity cannot be controlled. The goal of this report is to develop a theory describing circuits that use the so-called power feedback option. This principle is explained with the help of a charge-pump model. With this model it is possible to describe how the circuit operates and to determine which requirements must be met to obtain a sinusoidal mains current. In the first case a voltage source is used in the model; in the second case the voltage source is replaced by a current source. This leads to two different (but at the same time very similar) models and two different circuits: circuits with one power feedback path. The components of the resulting circuits are determined, and the circuits are simulated and built for measurements, so that theory and practical results can be compared. Non-ideal effects are explained and, where possible, eliminated. With these single-feedback circuits it is possible to obtain a CF_env close to 1, but in that case the EMI is very high. It is also possible to decrease the EMI, but this increases CF_env to about 2. The THD of the mains current with single-feedback circuits is about 10%. If the two models are combined, a third model results: a model for the so-called double power feedback circuits. It is assumed that each feedback path draws a sinusoidal current from the mains.
In that case the resulting mains current will be sinusoidal too. These circuits are used to reduce or eliminate the disadvantages of the single power feedback circuits: CF_env is low enough, and at the same time the expected EMI is very low. With two feedback paths there are more requirements to be met, which makes it more difficult to determine component values. Rules to calculate these components are given. Although it is possible to calculate component values for a given load power and resistance, fine-tuning is still necessary. If the calculations described in this report are used, CF_env will be below 1.8 and at the same time the THD of the mains current is below about 10%.
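Figures of the kind quoted above (THD around 10%, crest factors near 1) can be checked on a sampled period of the mains current. A minimal pure-Python sketch, with our own function names, computing harmonic magnitudes by correlation and from them the THD and crest factor:

```python
import math

def fourier_magnitudes(samples, max_order):
    """Magnitudes of harmonics 1..max_order of one sampled period."""
    n = len(samples)
    mags = []
    for k in range(1, max_order + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * k * i / n)
                 for i, x in enumerate(samples))
        mags.append(2.0 * math.hypot(re, im) / n)
    return mags

def thd(samples, max_order=40):
    """Total harmonic distortion: RMS of harmonics 2..max_order
    relative to the fundamental."""
    m = fourier_magnitudes(samples, max_order)
    return math.sqrt(sum(a * a for a in m[1:])) / m[0]

def crest_factor(samples):
    """Peak value over RMS value of one sampled period."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return max(abs(x) for x in samples) / rms
```

A pure sine gives THD 0 and crest factor sqrt(2); adding a 10% third harmonic gives THD 0.1.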
Candidate: C.D.C. Hooijer
Report no.: EMV 97-07
Graduation date: 12 June 1997
Graduation project: Single switch regulating ballast topologies - A circuit capable of dimming a fluorescent lamp while maintaining low input current distortion
Supervision: ir. P. Arts (Philips)
Graduation professor: prof.ir. J. Rozenboom
Summary:
This report describes the results of my graduation project at Philips Lighting Eindhoven via the Eindhoven University of Technology. It describes the investigation of potential single-switch regulating ballast topologies for lighting applications. Many prior-art electronic ballasts that perform both power-factor correction and inverter functionality include two or more power switches (in the form of transistors). Because the cost of transistors is relatively high, reducing their number may have a significant effect on the cost of the ballast. The investigated ballasts fulfil the IEC 1000-3-2 class C requirements on line-current harmonic distortion. The ultimate goal of this report is to demonstrate that (dimmable) single-switch ballasts have the potential to be small and cost-effective and can compete with present ballasts, such as the up-converter/half-bridge topologies. In the literature six interesting single-switch ballasts were found. Analysis of a series- and a parallel-resonant soft-switching ballast topology showed that these types of circuit are not very attractive as ballasts: both depend strongly on their load (lamp resistance), are not suitable for light dimming, and may suffer from high voltage stress on the power switch (more than 3x the peak mains voltage). Analysis, simulation, design rules and experimental results are given for a small and cost-effective ballast. The ballast drives a TL-5 49 W lamp from the mains (230 V), achieves a high power factor (0.99), low THD (12%) and high efficiency (87%). The voltage stress on the power switch (Vdss = 800 V) is twice the peak mains voltage under nominal operation. The operating frequency of the ballast is 50 kHz. The lamp current is sinusoidal and the lamp power can be controlled through PWM (1%).
Drawbacks of the circuit built are the (excessive) power dissipation in the power switch (3-4 W), due to hard switching, and the voltage increase across the switch when the lamp is dimmed to a (very) low level. Nevertheless, the circuit offers good performance and proves to be small and cost-effective. Compared with the single-switch ballasts found in the literature: single-switch ballasts require more magnetic components (in the form of inductors) than present ballasts, but less silicon is used. Compared to up-converter/half-bridge topologies, a high-voltage IC and a switch are exchanged for a high-voltage/high-current switch, and a switch is exchanged for an inductor. Further investigation of this and/or other single-switch ballasts is recommended, since not all possibilities have been exploited.
Candidate: M.M.P. Curvers
Report no.: EMV 97-01
Graduation date: 13 February 1997
Graduation project: A brushless DC motor as main drive of a copier
Supervision: prof.dr.ir. A.J.A. Vandenput, ir. J. DeJ. Manrique Lizarraga (Océ-Nederland B.V.), ing. S. Winteraeken (Océ-Nederland B.V.)
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary:
At present the main drive of copiers based on the Océ 3045 and Océ 3165 is realised with an asynchronous motor combined with a gearbox. In this graduation project it was investigated whether this drive can be replaced by a brushless DC motor. The list of requirements starts from a black-box solution that can replace the current asynchronous-motor solution without large development effort or modifications on Océ's side. At present two manufacturers have indicated that they are willing to work out such a solution on short notice. Besides the black-box solution, proposals are also made for building the drive electronics for the brushless DC motor in-house. For simple applications there are various ICs on the market that can take care of the control, possibly in combination with an external output stage. When, for example, extensive error handling is desired, implementing the control in a microprocessor comes into view; this solution is therefore being worked out further by a trainee. To be able to predict the effects of, for example, load variations and error situations, especially on the current waveform, a simulation model of the drive was built in Matlab (in combination with Simulink). The structure of this simulation is described completely. In addition, a number of measurements were carried out to verify the model; these measurements show that the model meets expectations well. It is proposed to additionally build a good load model of the copier, so that changes in the drives can first be evaluated by calculation. A comparative study between the asynchronous and the brushless DC solution was carried out. For the asynchronous drive the Océ 3045 solution was used, while for the brushless DC variant an existing motor and drive from the American company Aerotech were purchased.
Of particular interest in the measurements were the speed variations in the drive caused by load variations. Dynamically, both motors turn out to be equivalent. A disadvantage of the brushless DC solution, however, is the low inertia and speed of the rotor due to the absence of a gearbox: the energy stored in this 'flywheel' is too small to absorb load variations. Speed variations caused by load changes must therefore be handled by the speed controller, an aspect that should be studied carefully when developing the controller. Finally, the different motor types were compared to see which are suitable as main drive. Four parameters are important in making the choice: lifetime, cost price, reliability and availability. When these parameters are applied, three variants remain: the current asynchronous motor with gearbox and a brushless DC motor as stand-by drive; a brushless DC drive without gearbox; and a combined asynchronous/synchronous motor with gearbox. For the sake of simplicity in development it is proposed to continue with the asynchronous motor for the time being; the brushless DC drive will be developed further as a sideline investigation.
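The flywheel argument above, that a low-inertia rotor absorbs load transients poorly, follows directly from the stored kinetic energy E = 0.5*J*omega^2. A small sketch (our own illustration, not from the report) of the speed dip when a transient extracts a given energy:

```python
def speed_after_transient(J, omega0, delta_E):
    """Angular speed (rad/s) after a load transient extracts delta_E
    joules from a rotor with inertia J (kg m^2) spinning at omega0,
    assuming all of the transient energy comes from the 'flywheel':
    0.5*J*omega0**2 - delta_E = 0.5*J*omega1**2."""
    E0 = 0.5 * J * omega0 ** 2
    if delta_E >= E0:
        raise ValueError("transient exceeds the stored kinetic energy")
    return (2.0 * (E0 - delta_E) / J) ** 0.5
```

Halving the inertia at the same speed roughly doubles the energy shortfall, so the speed dip for the same transient is larger; this is the burden that shifts to the speed controller when the gearbox is removed.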
Candidate: M.E. Nillesen
Report no.: EMV 97-10
Graduation date: 28 August 1997
Graduation project: A new model for flux determination in an asynchronous machine with slip rings
Supervision: dr.ing. F. Blaschke
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary:
In the early days of electrical drive technology the DC machine was often used, because it makes the realisation of a high-performance drive relatively simple. However, a DC machine is equipped with a commutator, which requires much maintenance, and the industry is therefore very interested in control techniques for rotating-field machines. Thanks to developments in the seventies (Field Oriented Control) and eighties (Direct Self-Control), the induction machine is widely used in high-performance drives, because its robust construction keeps cost and maintenance low. To apply field-oriented control, the position of the magnetic flux must be known. With the U/I model this position is determined from the stator voltages and currents. The accuracy is excellent for frequencies above 5 Hz, but below 5 Hz the accuracy decreases, and the model is not at all able to determine the flux at standstill. Further research in electrical drive technology therefore focuses mainly on the region around zero frequency. Thanks to an invention by Dr. Blaschke within the group 'Electromechanics and Power Electronics' of Eindhoven University of Technology, a solution is within reach. The purely scientific research connected with this invention makes clear that a reference model is needed that determines the flux position over the whole frequency range with high accuracy. Therefore a new flux model, the so-called model C, was derived, in which the magnetising current is determined from the stator and rotor currents. This again shows the purely scientific character of this model, because the rotor currents can only be measured in an asynchronous machine with slip rings. A number of machine parameters must be identified; for this purpose Dr. Blaschke derived identification procedures.
The first goal of this investigation was to simulate these identification techniques. To simulate them in real time on a DSP system, a machine structure including saturation is necessary, because the identification techniques exploit exactly this saturation. The already available linear machine structure was extended with saturation. First, the consequences of applying a linear field-oriented controller to the (non-linear) machine structure were analysed: in the saturated region, deviations in torque and flux were observed. Then a non-linear controller was applied, in which no deviations were observed. Subsequently model C was connected to the machine structure and the identification techniques were simulated. The simulation gives insight into the importance of some identified parameters, because a small error can have disastrous consequences in the subsequent identifications. The simulations were verified on a 30 kW slip-ring induction machine fed by a hysteresis current controller. The laboratory tests showed that iron losses may no longer be neglected; therefore a variant of model C is introduced. This model determines the flux position with high accuracy over the whole frequency range in steady state. It will nevertheless suffice as a reference model in purely scientific research. The real-time simulation of the identification techniques provides much realistic information. Furthermore, the steady-state model ensures high accuracy over the whole frequency range, also during saturation.
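The core idea of model C, forming a magnetising-current vector from measured stator and rotor currents, can be sketched as follows. This is a bare-bones illustration under simplifying assumptions (amplitude-invariant Clarke transform, rotor currents simply rotated into the stator frame over the rotor angle); the actual model C also involves the identified machine parameters.

```python
import math

def clarke(i_a, i_b, i_c):
    """Amplitude-invariant Clarke transform: three phase currents to an
    (alpha, beta) current vector."""
    return ((2.0 * i_a - i_b - i_c) / 3.0,
            (i_b - i_c) / math.sqrt(3.0))

def magnetising_current(i_s_abc, i_r_abc, rotor_angle):
    """Sketch of the model-C idea: stator current vector plus the rotor
    current vector rotated into the stator frame over the measured
    rotor angle (slip rings make the rotor currents measurable)."""
    isa, isb = clarke(*i_s_abc)
    ira, irb = clarke(*i_r_abc)
    c, s = math.cos(rotor_angle), math.sin(rotor_angle)
    return (isa + c * ira - s * irb,
            isb + s * ira + c * irb)
```

Because the construction uses only currents, it works down to zero frequency, which is exactly the region where the voltage-based U/I model fails.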
Candidate: J.L.A. Vergouwen
Report no.: EMV 97-11
Graduation date: 28 August 1997
Graduation project: A unified power-flow controller as series compensator
Supervision: dr. J.L. Duarte
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary:
A unified power-flow controller (UPFC) offers the possibility to provide VAR compensation, to control the line impedance, amplitude and phase angle of a transmission line, to isolate harmonics, and to control the power flow in real time. A UPFC therefore meets the requirements for flexible AC transmission systems (FACTS). A UPFC consists of a series and a shunt compensator: the series compensator can add a voltage to the grid, the shunt compensator can inject a current into it. Both consist of a voltage-source inverter connected to the grid through a transformer. This report first describes the operating regions of a UPFC. The series compensator is then simulated; the output voltage of the inverter is controlled by means of vector modulation. An output filter, which removes the higher harmonics caused by the inverter, is also simulated and placed between the inverter and the series transformer. The simulated series compensator works as a harmonic isolator by means of a proportional controller. A demonstration model of the series compensator has been built. This model is not only made to compensate harmonics: small changes in the control program also allow it to operate as a phase shifter and amplitude controller. The phase shifter and amplitude controller give good results; the harmonic compensator only works well when it is programmed to suppress a single harmonic. Much can still be done to improve the series compensator, such as a version with a more advanced controller and an improved output filter. In addition, a transformer better suited to series compensation could be purchased. Ultimately the series compensator can be combined with a shunt compensator.
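The single-harmonic compensation mode of the demonstrator can be sketched as: isolate one harmonic of a sampled line-current period by correlation, then oppose it with a proportional reference. The function name and gain are our own; the report's controller acts sample-by-sample rather than on a stored period.

```python
import math

def harmonic_compensation_ref(i_line, order, kp):
    """Extract one harmonic of a sampled line-current period by Fourier
    correlation and return a proportional counter-reference for the
    series inverter (sketch; kp is an illustrative gain)."""
    n = len(i_line)
    re = sum(x * math.cos(2 * math.pi * order * i / n)
             for i, x in enumerate(i_line))
    im = sum(x * math.sin(2 * math.pi * order * i / n)
             for i, x in enumerate(i_line))
    a, b = 2.0 * re / n, 2.0 * im / n   # cosine/sine coefficients
    return [-kp * (a * math.cos(2 * math.pi * order * i / n)
                   + b * math.sin(2 * math.pi * order * i / n))
            for i in range(n)]
```

Because only one correlation pair is computed, the scheme suppresses exactly one programmed harmonic, mirroring the limitation reported for the demonstrator.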
Candidate: A.A.G. van Zwam
Report no.: EMV 97-09
Graduation date: 28 August 1997
Graduation project: Realisation of an industrial flux-oriented rectifier
Supervision: dr. J.L. Duarte, ir. C.G.E. Wijnands (Prodrive B.V.)
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary:
The power supply of drive systems mainly uses DC voltage. For rectifying the three-phase mains, switched rectifiers offer a number of advantages over line-commutated rectifiers: the mains is loaded with purely sinusoidal currents and the power factor of the mains current is adjustable. The switched rectifier is also able to feed energy back into the mains, and the DC voltage can be raised above the peak value of the mains voltage. In this project, a design of a switched rectifier that had been brought to the laboratory stage within the Electromechanics and Power Electronics group was developed further into an industrial prototype. In the system, a voltage-source inverter generates a three-phase voltage system; this system is connected to the public three-phase mains through three inductors. The inverter is controlled by a digital control system built around a Texas Instruments TMS320C40 digital signal processor. In the theory behind the design, the three-phase mains together with the inductors is regarded as a synchronous machine. The air-gap flux in this virtual machine is used in the control algorithm as the reference quantity of the field-oriented coordinate system. The rectifier is in principle controlled by a cascade: a fast inner loop takes care of the sinusoidal shape and the power factor of the mains current, while a slow outer loop regulates the DC-bus voltage. Practice, however, made extensions of the control algorithms necessary: non-idealities in the system made replacing the proportional controller in the current loop by a PID controller desirable, and improving the stability of the DC voltage motivated the use of load anticipation in the DC-bus voltage control.
Well-defined procedures for protection and start-up were further added to the control algorithm to give the rectifier sufficient robustness. On the 10 kVA industrial prototype, the quality of the mains loading and the response of the DC-bus voltage to a load step were determined by measurement. The standards for the emission of harmonic currents into the mains are amply met. The additions to the control algorithm also prove valuable: in both cases the behaviour of the rectifier improves considerably.
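The cascade with load anticipation described above can be sketched in a few lines: the outer voltage loop produces a current reference, the measured load current is added as feed-forward, and the inner loop tracks the result. Gains, sample time and the PI (rather than PID) form are illustrative, not the report's values.

```python
class PI:
    """Minimal discrete PI controller (illustrative gains)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def cascade_step(v_ref, v_dc, i_meas, i_load, outer, inner):
    """One sample of the cascade: the slow outer loop regulates the
    DC-bus voltage and sets the current reference; the measured load
    current is added as feed-forward (the 'load anticipation' of the
    report); the fast inner loop tracks the current reference."""
    i_ref = outer.step(v_ref - v_dc) + i_load   # load anticipation
    duty = inner.step(i_ref - i_meas)
    return i_ref, duty
```

The feed-forward term lets the current reference jump immediately on a load step, instead of waiting for the slow voltage loop to integrate the resulting bus-voltage sag.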
VAKGROEP INFORMATIE & COMMUNICATIESYSTEMEN
LEERSTOEL DIGITALE INFORMATIESYSTEMEN
Candidate: M.E. de Gier
Report no.: ICS-EB 648
Graduation date: 12 June 1997
Graduation project: Multi-protocol over ATM; The usability of MPOA for PTT Telecom
Supervision: ir. G.P. Buitenhuis (KPN Research Leidschendam)
Graduation professor: prof.ir. F. van den Dool
Summary:
Current corporate networks are beginning to reach the limits of their capability to keep up with increasing user requirements. More capacity and flexibility is required within the network in order to support both traditional and new applications, such as multimedia applications. Networks are getting bigger and bigger, producing long route-calculation times, and a router bottleneck is created. ATM is generally seen as one of the network technologies able to fulfil these tasks. For a smooth introduction of ATM into a corporate network it is essential that ATM can work together with legacy LAN equipment. It is important that computers with legacy LAN interface cards are able to communicate among themselves over an ATM network, and that they can communicate with workstations equipped with ATM adapter cards. Furthermore, ATM workstations should be able to communicate with each other. This can be accomplished using one of the ATM-based interconnection protocols: Classical IP over ATM, LAN emulation, or Multi-protocol over ATM.
Problem definition and purpose. ATM has many advantages, such as high speed, scalability, the power to create virtual LANs, and the ability to support quality of service. All of this is positive, but not enough: there has to be a mechanism to integrate ATM into existing networks without having to replace infrastructures such as Ethernet, token ring, and TCP/IP, IPX or AppleTalk. In January 1994 the Internet Engineering Task Force (IETF) defined a specification providing native support of Classical IP over ATM, abbreviated Clip. This means that networks with IP protocols (but not other protocols, like IPX and AppleTalk) can run over ATM. With the same goal of supporting existing network traffic over ATM without modification, LAN emulation, abbreviated Lane, was developed by the ATM Forum in January 1995. Both specifications, Clip and Lane, are partial solutions to the problem of integrating ATM, so some problems remain: currently neither of these specifications defines how to leverage the quality-of-service capabilities of ATM networks. This is where Multi-protocol over ATM (Mpoa) comes in. Mpoa extends Clip and Lane. In a nutshell, Mpoa does three things. First, it defines a high-performance, low-latency way to route IP and other protocols across an ATM network. Second, it enables network managers to build virtual subnetworks that span routed boundaries, so users can be grouped together regardless of their physical location. Finally, Mpoa permits applications to use ATM's quality-of-service capabilities. The question this thesis tries to answer is how Mpoa accomplishes these three things. To come to an answer, the functionality of Mpoa is worked out in more detail. The final goal is to provide information about the interest of Mpoa for PTT Telecom. Attention is focused on the following questions:
• how does Mpoa work?
• what is the relationship with other protocols or solutions?
• when should Mpoa be used, and when other solutions?
• when is Mpoa of interest? and
• what does an implementation guideline look like?
Candidate: W.M. van Herwijnen
Report no.: ICS-EB 647
Graduation date: 12 June 1997
Graduation project: Scope 2000; towards a target architecture for calling card services
Supervision: drs. R. Betlehem (PTT Telecom B.V.)
Graduation professor: prof.ir. F. van den Dool
Summary:
Het afstudeerverslag behandelt de dienstverlening die PTT Telecom met behulp van calling cards Ievert. Deze dienstverlening is bekend onder de gedeponeerde merknaam scope. De scopediensten stellen gebruikers onder andere in staat vanaf elk willekeurig telefoontoestel in ruim 60 Ianden telefoongesprekken tot stand te brengen zonder dat zij daarvoor contant geld nodig hebben. Teneinde de concurrentiepositie van de scopediensten voor een periode van meerdere jaren te versterken dienen een aantal problemen te worden weggenomen. In dit verslag wordt aangetoond dat hiertoe het dienstenportfolio en de ondersteunende infrastructuur moeten worden herontworpen. Het resultaat van het herontwerpproces, de doelarchitectuur, kan vervolgens als basis voor de migratie van de bestaande situatie dienen. In dit verslag is de eerste fase van het herontwerpproces uitgewerkt en worden de overige ontwerpfase beschreven. De oplossingsmethodiek is bepaald met behulp van probleemanalyse. Deze analyse heeft geresulteerd in de vaststelling van de oorzaken die aan de problemen ten grondslag liggen. Aangezien de concurrentiepositie voor een periode van meerdere jaren moet worden versterkt, dienen deze oorzaken structureel te worden weggenomen. Herontwerp van dienstenportfolio en ondersteunende infrastructuur blijken de basis voor realisatie van deze doelstelling. In het afstudeerverslag wordt aangetoond dat bij het herontwerpproces hulpmiddelen kunnen worden gebruikt voor een doelmatige en doeltreffende totstandkoming van de doelarchitectuur. Hiertoe is de vaardigheid van het ontwerpen geanalyseerd en zijn criteria opgesteld waaraan die hulpmiddelen moeten voldoen. Vervolgens zijn de bestaande ontwerphulpmiddelen ge"inventariseerd en getoetst tegen de selectiecriteria. Voor de doelarchitectuur van de scopediensten van PTT Telecom blijken de Telecommunications Information Networking Architecture (TINA) en de Open lnfrastructuur voor Chipcardtoepassingen (OIC) het meest geschikt. 
TINA and OIC were used in the first design phase. The results of this phase are presented in the form of Business Models. The application of these tools in the remaining design phases is then discussed. The report concludes with concrete recommendations for the follow-up trajectory towards a target architecture for calling card services.
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
M.J. Prins Report no: ICS-EB 655 28 August 1997 Structuring an IT management organisation with ITIL; developing a Service Level Reporting system using SDM drs. H.L.J.J. Simons (Civility Amsterdam B.V.) prof.ir. F. van den Dool
Summary:
Business and government depend to a large extent on the information systems they use. Because of this dependence, the quality of information systems has become a very important issue; ever stricter quality requirements are being imposed. The quality of IT service provision can be controlled, so that customers can be served better, with the ITIL quality system. The increasingly complex systems call for a well-organised and well-equipped management organisation, which must guarantee the continuity and quality of the service. Agreements between the IT service provider and the customer about the quantity and quality of the IT service to be delivered are laid down in service level agreements (SLAs). These agreements are monitored by feeding back the realised quality levels and comparing them with the norm laid down in the service level agreement. This feedback takes place through so-called service level reports. With these objective reports the customer must be able to determine whether the agreements laid down in the SLA have been met and whether an extension or reduction of the service is desirable. The internal organisation of the IT service provider can use these reports to steer the tactical processes that keep or bring the quality of the service at the desired level. The goal of the assignment was to develop a generic service level reporting system that meets the users' wishes and can deliver the reports that are, or in the near future may be, requested. The reports must contain information about ITIL processes that are relevant to the customer, such as availability management (with report items such as availability and reliability of a service) and capacity management (with report items such as utilisation and response times of the service).
For objectivity and ease of use, the functions within the reporting process had to be automated as much as possible. The control paradigm was used to map out the input types, the output types and the environment of the reporting information system. The Systems Development Methodology (SDM) was applied to develop the service level reporting system. This method distinguishes four steps in the development of an information system:
1. Drawing up an information model, covering the structure of the company, the problems at hand, the contribution the system should make to solving them, and the conditions under which this must happen;
2. The functional design, covering the functions that can be distinguished within the system and the rules according to which interaction takes place;
3. The technical design, containing alternative technical solutions, with the same functionality, for the organisation of the information system;
4. The implementation phase, further subdivided into realisation, testing, conversion and introduction, and use and management.
Because of the complexity of the infrastructure and the diversity of the report items to be delivered, it is not possible to use one of the currently available standard packages to realise all required functionality. Standard applications can, however, be used for collecting and storing the necessary network and system management data (an NSM package), supplemented with scripts written in-house. Delivering the final reports on the internet, by e-mail or as a printed copy to the customers can also be realised with a standard package (a reporting tool). However, to process the large amounts of management data into correct and clear (easy to interpret) report items, the system will need a fast and intelligent processing program that splits, selects and aggregates the data. In addition, scripts must be written for the standard packages to be used and for the interfaces between the various functions. It is recommended to structure the knowledge of the network, the connected computer systems and the delivered services, gained among other things during the assignment, and to combine it into an overview. With this overview, the management organisation can more easily estimate the consequences of failures or changes for the connected customers.
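The split/select/aggregate step described above can be sketched as follows. This is a hypothetical illustration only; the record fields (`service`, `status`, `response_ms`) and the function name are assumptions, not the thesis's actual system.

```python
from collections import defaultdict

def build_report_items(records):
    """Aggregate raw management records into per-service report items.

    Each record is a dict like {"service": "mail", "status": "up", "response_ms": 120}.
    Returns availability (%) and mean response time per service.
    """
    # Split: group the raw records per service
    per_service = defaultdict(list)
    for rec in records:
        per_service[rec["service"]].append(rec)

    report = {}
    for service, recs in per_service.items():
        # Select: only records with a known status count towards availability
        known = [r for r in recs if r["status"] in ("up", "down")]
        up = sum(1 for r in known if r["status"] == "up")
        availability = 100.0 * up / len(known) if known else None
        # Aggregate: mean response time over the successful samples
        times = [r["response_ms"] for r in known if r["status"] == "up"]
        mean_ms = sum(times) / len(times) if times else None
        report[service] = {"availability_pct": availability,
                           "mean_response_ms": mean_ms}
    return report
```

A service level report would then compare such items against the norms agreed in the SLA.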
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
Report no: ICS-EB 653 H.W.H. de Groot 28 August 1997 The design of an Analogue Frontend for a DMT based ADSL modem ir. M.V. Arends (Philips Systems Laboratory Eindhoven) ing. C.C.M. Schuur (Philips Systems Laboratory Eindhoven) prof.ir. M.T.M. Segers
Summary:
The graduation report is confidential until September 1998.
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
P.J.H. v.d. Heuvel Report no: ICS-EB 649 12 June 1997 LaySiChain: The improved layout-sensitive scan chain inserter ir. P.W.M. Merkus (Philips ED&T Eindhoven) ir. E.J. Marinissen (Philips ED&T Eindhoven) prof.ir. M.T.M. Segers
Summary:
In an era of sub-micron technology, testing for manufacturing defects has become an integral part of the total design and production trajectory of integrated circuits. It is generally accepted that it pays off to take testing into account during the design phase; this is referred to as Design for Testability (DfT). The most popular structured DfT technique is scan design. With scan design, the flip-flops in a circuit are replaced by their scannable variants and are threaded together to form shift registers called scan chains. In test mode, the scan chains greatly improve the controllability and observability of a circuit. The current tool set that Philips Electronic Design and Tools offers to internal customers contains a tool called InScan which, on the basis of a netlist description of a circuit, is able to insert scan chains into the circuit. This traditional way of inserting scan chains does not take the physical positions of the flip-flops in the layout into consideration, and generally introduces long scan chain wires. The testability of the circuit is invariant under all permutations of the flip-flops in the scan chains. It was shown that ordering the flip-flops in a scan chain while taking the layout information into account yields enormous reductions of the total wire length. This reduces the routing area overhead and increases the routability of a design. Although a prototype tool called LaySiChain using this technique showed auspicious results, some serious obstacles are involved, like the required Cadence Framework environment and the need for manual interaction. After analysing the problem of creating scan chains with minimum cost, we first focus on a totally new design of the LaySiChain functionality. We present a new, simplified and orderly design flow and a fully automated and user-friendly LaySiChain tool. The Cadence Framework is no longer required.
Experiments on various industrial designs showed an average wire length improvement of 60%. In the second part, approximation algorithms for further improvement of LaySiChain are presented. Experiments using these so-called local search algorithms result in wire length improvements of even 35% compared to the wire length obtained by the stripes algorithm.
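The core idea of layout-sensitive chain ordering can be illustrated with a simple greedy nearest-neighbour heuristic: always append the closest unvisited flip-flop. This is a minimal sketch for illustration only, not LaySiChain's actual stripes or local search algorithms.

```python
from math import hypot

def chain_length(order, pos):
    # Total Euclidean wire length between consecutive flip-flops in the chain
    return sum(hypot(pos[a][0] - pos[b][0], pos[a][1] - pos[b][1])
               for a, b in zip(order, order[1:]))

def nearest_neighbour_order(pos, start):
    """Greedy ordering: always connect the chain to the closest unvisited
    flip-flop. pos maps flip-flop name -> (x, y) layout position."""
    unvisited = set(pos) - {start}
    order = [start]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda f: hypot(pos[f][0] - pos[last][0],
                                                 pos[f][1] - pos[last][1]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```

Since testability is invariant under any permutation of the flip-flops, such a reordering is free from a test-quality point of view; only the routing improves.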
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
H.G.H. Vermeulen Report no: ICS-EB 656 28 August 1997 Testing block synchronous digital circuits: clock system analysis and test scheduling ir. P.W.M. Merkus (Philips Research Laboratory Eindhoven) prof.ir. M.T.M. Segers
Summary:
One of the most popular structured DfT techniques is scan design. If scan design is applied to circuits that have multiple clocks, then testing such a design is not as straightforward as testing single clock designs. Data transfers between scan chains clocked by different clock sources may or may not be correct due to the uncertainty in the propagation delays of the clock and data signals. To support block synchronous digital circuits in the CAT flow, the data transfers between the various synchronous blocks, also known as clock domains, have to be detected. From these data transfers, a safe clocking order for the clock sources can be determined to test the entire design. The graduation report describes the required steps to test circuits with multiple clocks: clock system analysis to find the clock sources, data flow analysis to detect the data transfers between the clock domains, and test scheduling to generate a test protocol which can be used to test the design. Two standard methods of testing block synchronous digital circuits are described, and algorithms are developed to improve these methods with respect to test time and tester memory. A prototype tool named AnaClock is described which automates the steps mentioned above. Applying the tool to industrial designs has led on average to a test time reduction of 25% for the standard POSTAS approach and 50% for the standard TMWP approach.
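Deriving a safe clocking order from detected data transfers can be illustrated as a topological sort of the clock-domain graph. This sketch assumes, purely for illustration, the convention that a source domain is clocked before any domain it transfers data to; the report's actual scheduling algorithms are more involved.

```python
from collections import deque

def safe_clock_order(domains, transfers):
    """Return a clocking order in which every domain is clocked only after
    all domains it receives data from (a topological sort).

    transfers: iterable of (src_domain, dst_domain) data-transfer edges.
    Raises ValueError on a cycle, where no safe static order exists.
    """
    indeg = {d: 0 for d in domains}
    succ = {d: [] for d in domains}
    for src, dst in transfers:
        succ[src].append(dst)
        indeg[dst] += 1
    ready = deque(sorted(d for d in domains if indeg[d] == 0))
    order = []
    while ready:
        d = ready.popleft()
        order.append(d)
        for n in succ[d]:
            indeg[n] -= 1
            if indeg[n] == 0:
                ready.append(n)
    if len(order) != len(domains):
        raise ValueError("cyclic data transfers: no safe clocking order")
    return order
```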
Candidate name: Graduation date: Graduation project: Supervision:
Graduation professor:
Report no: ICS-EB 671 P. Coenraads 16 October 1997 Functionality assignment & domotics systems ir. H.G. Rave (W&G Special Products) ing. P. Lems (W&G Special Products) ing. J. van Eerd (W&G Special Products) prof.ir. M.P.J. Stevens
Summary:
The graduation report covers the graduation assignment carried out at W&G Special Products: developing a new concept for a building control system for the W&G group. Developing this concept is necessary because the turnover of building control products must increase. A new concept must remove a number of problem areas of the current system, and must also open up a number of new markets. To meet all requirements and wishes, a single concept is not enough; the problems are divided into groups and a concept is developed for each group. The three concepts are:
• Develop a 'Plug & play' module
• Develop a 'Minimaster' module
• Develop a 'Software tool'
The Plug & play module is, as the name says, a module that requires no or hardly any settings before it can work. The current system is very complicated with respect to functionality assignment and will therefore not be sold easily to installation companies that have no experience with domotics; installing a system is simply too complicated. The Plug & play module solves this problem by letting the installer set a function and installation room per module in a simple way. The Minimaster is a device that works with the same basic functionality as the Plug & play module, but is extended with a number of push buttons and a display. This extension makes it possible for the user to change part of the basic functionality. Moreover, the presence of this user interface enables a number of extra building control functions. The Software tool must provide faster functionality assignment for the current building control system, so that programming a system can be done in a much simpler way.
Moreover, the tool is able to generate project documentation from the given input. The Plug & play module has already appeared as a prototype. The other products have not been developed yet; the software tool will, however, be made in the near future. For the Minimaster it is not yet clear whether this product will be made soon.
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
C.F.L. David Report no: ICS-EB 651 12 June 1997 Design of an intelligent actuator-controller using Profibus-DP ir. E.P.M. Bakker (Ellips B.V. Eindhoven) ir. J.P.C.F.H. Smeets (Ellips B.V. Eindhoven) prof.ir. M.P.J. Stevens
Summary:
At Ellips B.V. in Eindhoven, several vision systems are being developed for a range of industrial applications. One of those systems is a fruit-grading system, whose basic function is sorting (or better: grading) several sorts of fruit, like apples, pears, peaches and kiwis, on size, weight and flush. In the near future it should be possible to grade on quality too. At the moment of writing, many of those machines had been sold to several firms in Europe. The main problem of the current machines is that they are physically very large, because of the many grading exits and the multiple transporting lines driven in parallel. A line can be up to 100 meters long, and with more machines to control, a total length of more than 200 meters can be reached. To control all relays, sensors and weighing devices, hundreds of meters of cable are needed. Not only is a lot of cable needed (a cost issue), but searching for errors in a defective system is also difficult because the system lacks a well-designed structure. The development of new industrial fieldbuses also makes it necessary to introduce a new platform for the fruit-grading systems. Ellips B.V. decided to use Profibus as the industrial fieldbus. Profibus stands for PROcess FIeld BUS and is now widely used in industry. With Profibus, intelligence is distributed over the system, so the processor load of the main system is reduced. Profibus is based on the seven-layer OSI model, and its transmission protocol is based on the RS-485 standard. This makes Profibus very suitable for high speed transmission up to 12 Mbit/s in noisy environments. The main assignment was to implement an intelligent actuator-controller in a Profibus-DP system. DP means Decentralized Periphery and is one of the three possible Profibus protocols.
Profibus-DP is designed for very high speed transmission in less complex applications such as I/O controllers like actuator/sensor controllers. In these applications the number of data-exchange and diagnostics bytes per access is smaller than in more complex applications, where diagnostics are more important and therefore introduce a bigger overhead and thus a lower effective transmission rate. The actuator-controller is designed around an 80C32 microcontroller, based on the well-known 8051 family of microcontrollers. In order to cope with the high transmission speed of 12 Mbit/s, an ASIC (Application Specific Integrated Circuit), the SPC3 from Siemens, is used to handle almost the complete communication protocol, relieving the microcontroller of this processing load. In order to communicate with the SPC3, communication software had to be bought from Siemens. This firmware had to be implemented on the microcontroller and extended with self-defined routines to perform the functions of the actuator-controller. The high-level flow for the software was given. The functions of the actuator-controller are to control 16 different relays and to perform several diagnostics, like overvoltage detection and short-circuit detection. Finally, recommendations are given about the sensor/encoder-controller and the weighing machine.
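The controller's function (16 relays driven by an output word, plus diagnostics) can be sketched as a toy model. This is a hypothetical illustration; the class name and the voltage/current thresholds are assumptions, and the real firmware runs on the 80C32, not in Python.

```python
class ActuatorController:
    """Toy model of the actuator-controller: 16 relays driven by a 16-bit
    output word received over the bus, plus simple diagnostic flags."""

    NUM_RELAYS = 16

    def __init__(self):
        self.relays = 0  # one bit per relay; bit i set means relay i closed

    def write_outputs(self, word):
        # The 16-bit word would arrive in a Profibus-DP data-exchange cycle
        if not 0 <= word < (1 << self.NUM_RELAYS):
            raise ValueError("output word must fit in 16 bits")
        self.relays = word

    def relay_is_on(self, i):
        return bool((self.relays >> i) & 1)

    def diagnostics(self, supply_volts, channel_currents_ma, limit_ma=500):
        """Return diagnostic flags: overvoltage and per-channel short circuit.
        The 30 V and 500 mA limits are illustrative assumptions."""
        return {
            "overvoltage": supply_volts > 30.0,
            "short_circuit": [c > limit_ma for c in channel_currents_ma],
        }
```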
Candidate name: Graduation date: Graduation project: Supervision:
Graduation professor:
R.D.H.J. Faber Report no: ICS-EB 643 24 April 1997 Distributed Java/Smalltalk framework; a framework supporting distributed Java/Smalltalk applications ir. S. v.d. Kuilen (ELC Object Technologie B.V., Capelle a/d IJssel) ir. M.J.M. van Weert prof.ir. M.P.J. Stevens
Summary:
Distributed systems have existed as long as there have been computer networks. "Distributed systems" is a broad term; the computers in such a distributed system can make use of each other's services. In object-oriented programming languages, a distributed system means that objects on one computer can make use of objects on another computer. Both Java and Smalltalk are object-oriented programming languages. A Java application can be added to an internet page and can be moved across a computer network. A distributed Java/Smalltalk application is an application in which the Java objects make use of Smalltalk objects located at a physically different location. The "Distributed Java/Smalltalk framework" consists of components that make it easy to develop such a distributed Java/Smalltalk application. Three components were developed for this. The first component is a Smalltalk object server that handles service requests. The second component is the Java proxy communicator, which makes sure that service requests reach the server and that their results are processed. The last component is a Java proxy generator, which generates proxy objects for the Smalltalk objects whose services are to be used. The proxy objects make it possible to use the services of objects on another computer in a structured way. With this distributed Java/Smalltalk framework it is easy to develop distributed Java/Smalltalk applications. The developer can concentrate on developing the functionality of the distributed application, without having to worry about how the communication is established between the Java subsystem on the one hand and the Smalltalk subsystem on the other.
The distributed Java/Smalltalk framework will mainly be applicable in environments where Smalltalk applications are already present. With this framework, certain subsystems can easily be made available to the internet. The current infrastructure of the internet does, however, impose limits on the complexity of the distributed applications, for two reasons. On the one hand, a complex application will be larger, so transporting the Java application over the internet will take longer. On the other hand, a complex application will communicate more with the Smalltalk server and will therefore also be slower.
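The object server / generated proxy structure described above can be sketched in a few lines. This is a language-neutral illustration of the remote-proxy idea, not the framework's actual Java/Smalltalk code; all class and method names here are hypothetical.

```python
class ObjectServer:
    """Stand-in for the Smalltalk object server: holds named objects and
    executes service requests on them."""

    def __init__(self):
        self._objects = {}

    def register(self, name, obj):
        self._objects[name] = obj

    def invoke(self, name, method, args):
        # In the real framework this request would arrive over the network
        # via the proxy communicator, not as a direct call.
        return getattr(self._objects[name], method)(*args)


class Proxy:
    """Generated proxy: forwards every method call to the server, so client
    code can use the remote object as if it were local."""

    def __init__(self, server, name):
        self._server, self._name = server, name

    def __getattr__(self, method):
        return lambda *args: self._server.invoke(self._name, method, args)


class Account:
    """Example service object living 'on the server side'."""
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance
```

The client sees only the proxy; the communication machinery stays hidden, which is exactly the division of labour the framework aims at.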
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
S.A.H. van Hoof Report no: ICS-EB 650 12 June 1997 SHE as simulation tool for packet switch fabrics ir. M.J.M. van Weert prof.ir. M.P.J. Stevens
Summary:
Today's and future services ask for a broadband network that is flexible and fast. ATM (Asynchronous Transfer Mode) is such a network. ATM information is transported in fixed-size packets called cells. Packet switches route these cells from source to destination; the transport of cells from packet switch input to packet switch output is called switching. Multiple switches together form a packet switch fabric. Simulations can be done in order to get an indication of the performance of such a packet switch fabric. These simulations must be done on a computer, which calls for modeling. There are many ways to create a model. This thesis describes the SHE method used for modeling packet switch fabrics. The SHE method is a new object-oriented specification method developed at the Eindhoven University of Technology. SHE supports formal and informal modeling. Informal modeling is necessary for a good understanding of the problem and for a smooth path towards a formal model. A formal model in SHE consists of a behavior description in the language POOSL. The SHE method comes with a special tool which is able to simulate the POOSL behavior descriptions of the packet switch fabrics. A 2 input 2 output (2m2) packet switch and a 4 input 4 output (4m4) packet switch fabric are modeled according to the SHE method and fed to the SHE simulation tool. During the packet switch (fabric) simulations, the cell delay and the switch buffer length are measured, and the simulation results are displayed in graphs. Furthermore, the performance of the simulation tool itself is tested. The conclusion is that the SHE method is a very nice and adequate instrument for modeling packet switch fabrics, but at this moment the SHE simulation tool is found not sufficient for simulating large packet switch fabric models.
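The two quantities measured in the simulations, cell delay and buffer length, can be illustrated with a minimal slot-based model of one buffered output port. This is a hypothetical sketch, not the POOSL/SHE model from the thesis.

```python
from collections import deque

def simulate_output_port(arrivals, num_slots):
    """Slot-based simulation of one switch output port with a FIFO buffer.

    arrivals: dict mapping slot number -> number of cells arriving for this
    output in that slot. One cell leaves the buffer per slot.
    Returns (mean cell delay in slots, maximum buffer length observed).
    """
    buf = deque()
    delays, max_len = [], 0
    for t in range(num_slots):
        for _ in range(arrivals.get(t, 0)):
            buf.append(t)               # remember each cell's arrival slot
        max_len = max(max_len, len(buf))
        if buf:                         # serve exactly one cell per slot
            delays.append(t - buf.popleft())
    mean_delay = sum(delays) / len(delays) if delays else 0.0
    return mean_delay, max_len
```

When two cells arrive for the same output in one slot, one of them must wait, which is exactly why buffer length and cell delay are the interesting performance figures.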
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
Report no: ICS-EB 663 H.A.J. Kester 16 October 1997 An intelligent weight controller using Profibus ir. J.P.C.F.H. Smeets (Ellips B.V. Eindhoven) prof.ir. M.P.J. Stevens
Summary:
The graduation report discusses the design of a weight controller that is part of a new fruit grading system. The weight controller must be able to weigh fruit that passes by in cups with a precision of one gram. The weight sensor is a standard load cell that uses the Wheatstone bridge principle. Up to eight lanes of cups must be processed, at speeds of twenty cups per second per lane. The results of the weight measurement must be sent to a master computer over a Profibus (Process Fieldbus) connection using the Profibus protocol. Furthermore, the weight controller must be able to update its software via the same Profibus channel. The weight controller is composed of the following components:
• a bridge excitation circuit
• a bridge amplifier
• a multiplexed A/D-converter
• a Digital Signal Processor
• an 80C32 host-processor
• a Profibus interface
• an FPGA
Two weighing methods are described:
• filtering the bridge signal with a FIR filter and averaging ten filtered samples
• extracting the weight from the damping ratio and oscillation frequency of the bridge signal.
The designed weight controller is suitable for the new fruit grading system, although it still has to be tested in practice. The "averaging" weighing method works on an Agra machine with a precision of 2 grams. Better mechanical behaviour of the machine might improve this precision. The method using damping ratio and oscillation frequency still has to be tested. For future versions of the weight controller it is desirable to implement Profibus-DP and Profibus-DPE to ensure proper operation in other Profibus-DP networks. Furthermore, the two weighing methods must be tested on data from a different machine than the one from Agra.
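The first weighing method (FIR-filter the bridge signal, then average ten filtered samples) can be sketched as follows. The filter coefficients here are a simple moving average chosen purely for illustration; the actual coefficients and sample rates of the thesis design are not given in this summary.

```python
def fir_filter(samples, taps):
    """Direct-form FIR filter; returns only the fully-initialised outputs
    (i.e. outputs for which a complete tap window is available)."""
    n = len(taps)
    return [sum(t * samples[i - j] for j, t in enumerate(taps))
            for i in range(n - 1, len(samples))]

def estimate_weight(bridge_samples, taps, n_avg=10):
    """First weighing method: filter the bridge signal, then average the
    last n_avg filtered samples to suppress the cup's oscillation."""
    filtered = fir_filter(bridge_samples, taps)
    window = filtered[-n_avg:]
    return sum(window) / len(window)
```

With a moving-average filter, an oscillation whose period matches the filter length cancels exactly, which is the intuition behind filtering before averaging.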
Candidate name: Graduation date: Graduation project: Supervision:
Graduation professor:
M.L. de Leijer 28 August 1997 POOSL-Compiler for Smalltalk and C++ ir. M.C.W. Geilen dr.ing. P.H.A. van der Putten dr.ir. J.P.M. Voeten prof.ir. M.P.J. Stevens
Report no: ICS-EB 657
Summary:
To support the development of complex information-technology systems, adequate analysis, specification and design methods are required. At the section of Information and Communication Systems of the Faculty of Electrical Engineering at the Eindhoven University of Technology, the development of these methods is the subject of active research. This research resulted in the method SHE (Software/Hardware Engineering) [PV97][PVS95]. SHE is an object-oriented method for co-development of complex reactive software/hardware systems, covering analysis, specification and design. SHE incorporates the formal specification language POOSL (Parallel Object-Oriented Specification Language) [Voe95a][Voe95b][PV97]. To create a useful environment for developing complex systems with SHE, software tools that support the specification process are indispensable. Because of the formal syntax and semantics of POOSL, SHE has the potential for formal verification, transformation, simulation and implementation. POOSL specifications have to be translated to target code in order to perform simulations and implementations. This thesis describes the design and implementation of a compiler for POOSL, which is able to translate POOSL specifications to Smalltalk [PPS92] and to C++ [Str92]. The POOSL-compiler is implemented in the Smalltalk programming environment. The following results of other projects are used as a starting point for the implementation of the POOSL-compiler:
• the syntax and grammar of POOSL formulated in [Kup96]
• the mapping of POOSL on Smalltalk formulated by Marc Geilen
• the mapping of POOSL on C++ formulated in [Tet97].
The POOSL syntax and grammar were modified for semantic and syntactical reasons. The new syntax and grammar of POOSL are formulated in Appendices A and B; the mappings on Smalltalk and C++ are given in Appendices C and D.
The Smalltalk and C++ code that has to be generated can be seen as intermediate code, because in both cases supporting target code is required to implement the semantics of POOSL. This support is implemented by the POOSL-simulator (by Marc Geilen) and the C++ library (by [Tet97]). Because the POOSL-simulator holds the POOSL classes of a system, only translation of methods is required. The POOSL-compiler is not (yet) able to check the context conditions of POOSL, so these must be satisfied by the user.
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
X. Lin Report no: ICS-EB 666 16 October 1997 Template file for IDaSS to HDL-Verilog generation dr.ir. A.C. Verschueren prof.ir. M.P.J. Stevens
Summary:
The graduation report describes an implementation of a converter from IDaSS (Interactive Design and Simulation System) to the Hardware Description Language Verilog. With IDaSS, a digital system can be designed and simulated interactively at Register Transfer Level or higher. With a Hardware Description Language, a real chip layout of the digital system can be generated. The converter consists of conversion instructions optimized for the Verilog language. The file generated by the converter is the input file for a Verilog simulator or silicon compiler; the latter can generate files for manufacturing chips. The complete IDaSS system will consist of several interconnected tools, and different tools have been implemented successfully. The implementation details of the Verilog converter, Expressions, Unary Operators and Binary Operators are the main subject of the graduation report.
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
P.P.E. Meuwissen Report no: ICS-EB 672 16 October 1997 Designing and simulating a communication mechanism for a multi TriMedia system ir. H.W. van Dijk (Philips Semiconductors - SLE) ir. J. Koene (Philips Semiconductors - SLE) prof.ir. M.P.J. Stevens
Summary:
TM-1 is the first in a family of programmable multimedia processors from the TriMedia product group of Philips Semiconductors. This C-programmable processor has a high performance 32-bit Very Long Instruction Word (VLIW) DSPCPU core with video and audio peripheral units, designed to support popular multimedia applications like MPEG1 and MPEG2 decoding, MPEG1 encoding, 3-D graphics, V.34 modem and H.320/H.324 videoconferencing. For some very demanding applications like MPEG2 encoding, however, the processing power of several TM-1 chips is needed. This report describes the design of a PCI-based communication mechanism for a multi TriMedia system based upon standard IREF boards. This communication mechanism consists of software that uses the standard pSOS+ real time operating system on each of the TriMedia nodes to allocate system resources and to perform local synchronisation or message passing operations. The software implements global semaphores and global message queues, which is sufficient to support synchronisation of, and message passing between, tasks running on different TriMedia boards. I used formal methods to derive the semaphore mechanism, which ensures that it is reliable. Because of problems with the availability of the Windows 95 drivers required to download and start programs on several IREF boards at the same time in our PC-based reference design, we have not been able to test the communication mechanism in a real multi TriMedia system. Instead I have written an extensive simulation program in C++ to test the reliability of the communication mechanism. Because of the object oriented structure of the program, it is very easy to change the architecture of the simulated system. No deadlocks occurred during any of the simulations, which supports our assertion that the communication mechanism is reliable. From the simulation results I also derived a few rules of thumb for finding a good distribution of tasks over the available CPUs.
The simulation also enabled us to analyse the performance of the communication mechanism and to determine the influence of external sources like the protocol chosen for PCI arbitration and wait states on the PCI bus. Although the communication mechanism could not be tested on a multi TriMedia system, I did test it on a single TriMedia IREF board. This test was successful, so we can conclude that all parts of the communication mechanism work; I estimate that the system can be up and running within a week as soon as the required Windows 95 drivers are available.
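The idea of a global semaphore built on message passing can be sketched with threads and queues: one owner serves all requests sequentially, and nodes never touch the counter directly. This is a hypothetical Python illustration of the concept only; the thesis's mechanism runs on pSOS+ over the PCI bus and was derived with formal methods.

```python
import queue
import threading

class GlobalSemaphore:
    """Global counting semaphore served by a single owner thread; all
    nodes interact with it only through request messages (no shared state)."""

    def __init__(self, initial):
        self.requests = queue.Queue()
        self.count = initial
        self.waiters = []            # reply queues of blocked P-requests
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            op, reply = self.requests.get()
            if op == "P":
                if self.count > 0:
                    self.count -= 1
                    reply.put("granted")
                else:
                    self.waiters.append(reply)   # requester stays blocked
            elif op == "V":
                if self.waiters:
                    self.waiters.pop(0).put("granted")  # hand over directly
                else:
                    self.count += 1
                reply.put("done")

    def P(self):
        """Acquire: send a P-message and block until the owner grants it."""
        reply = queue.Queue()
        self.requests.put(("P", reply))
        reply.get()

    def V(self):
        """Release: send a V-message and wait for acknowledgement."""
        reply = queue.Queue()
        self.requests.put(("V", reply))
        reply.get()
```

Because a single serving loop processes one request at a time, the counter can never be corrupted by concurrent access, which is the property the message-passing design buys.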
Candidate name: Graduation date: Graduation project: Supervision: Graduation professor:
R. Michielsen Report no: ICS-EB 661/662 16 October 1997 Modelling IDaSS elements in POOSL and Implementing POOSL in IDaSS dr.ing. P.A.H. van der Putten, dr.ir. J.P.M. Voeten prof.ir. M.P.J. Stevens
Summary:
Software/Hardware Engineering (SHE) is a new object-oriented method for the co-specification and design of complex reactive hardware/software systems. SHE incorporates a framework for design activities and a formal description language called POOSL (Parallel Object-Oriented Specification Language). Starting from informal object-oriented analysis, SHE produces rigorous system-level behaviour and architecture descriptions expressed in the POOSL language. This thesis describes the exploration of the path from IDaSS (Interactive Design and Simulation System) towards POOSL. It addresses the modelling of IDaSS elements in POOSL on the basis of two IDaSS designs. The first IDaSS design uses one Algorithmic Level block to describe an 8048 microprocessor; the second design describes the same microprocessor by means of Register Transfer Level blocks. It will be fairly easy to convert IDaSS designs to POOSL if we can make a general POOSL specification of these RTL blocks. This way we obtain a very suitable environment for the co-simulation of hardware/software systems. The second part describes a first exploration of the path from a specification in POOSL towards an implementation in hardware. As we mainly focus on the communication between process objects, some restrictions are applied to the POOSL model: only FSM-like descriptions are allowed, and the data part of POOSL is almost completely left out. A scheduler is implemented in hardware to handle process communication. The design and simulation tool IDaSS is used for the design of the hardware implementations.
-115-
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Begeleiding: Afstudeerhoogleraar:
G.M.L. Notten Rapport nr: ICS-EB 644 24 april 1997 Testcases for the PrizmaPlus Switch dr.ir. M.C.A.A. Heddes (IBM Zürich Research Laboratory) ir. M.J.M. van Weert prof.ir. M.P.J. Stevens
Summary:
PrizmaPlus is a high-speed ATM switch chip developed at the IBM Zürich Research Laboratory in Switzerland. Testcases have been written to test this chip. An architectural extension of the chip, called Control Packet Handling, had to be tested. New testing features had to be added for this purpose, including a microprocessor interface to and from which control packets can be sent. A Read and a Write Control Packet Testcase have been written. Furthermore, two testcases have been written to check the Link Parallelling functionality at the outputs.
-116-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
P.P.A.A. Peeters Rapport nr: ICS-EB 660 28 augustus 1997 The Development of a Video Server for Video-on-Demand ir. M.J.M. van Weert prof.ir. M.P.J. Stevens
Summary:
The Master's thesis describes a video server for Video-on-Demand (VoD) services. A customer using a VoD service selects a video from a large repository and can interactively change the playout sequence with VCR functions. VoD has a potentially large growth market, but improvements are still needed before it becomes commercially viable. A key performance objective in developing a VoD service is therefore to support the maximum number of concurrent users with an acceptable quality of service. The VoD system is described by dividing it into three parts: the video server, the network and the client. Each of these parts is worked out separately in the graduation thesis. Because the emphasis is on the video server, the server is further divided into a storage subsystem where the videos are stored and retrieved, a processing part for the retrieved video frames and client interactions, and a networking part that sends the video frames over the network. A very efficient way to support more concurrent accesses is to let the server and network share resources between users, lowering the total number of concurrent streams. A multicasting scheme is presented that provides multiple users with the same stream. When a stream cannot be shared between users, because some users perform VCR actions, bandwidth has to be reserved to cover the extra demand. Reserving resources in this way means trading off the maximum number of concurrent users allowed at the server against the quality of service delivered. Mechanisms are presented that minimize the amount of reserved resources while still meeting the requirements. A scheme is presented for storing and retrieving video frames efficiently, such that the number of users in a system with VCR functions is only slightly lower than in a system with just PLAY, PAUSE and STOP.
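The stream-sharing idea can be illustrated with a small batching sketch (the function name, the time-window policy and the data layout are invented for illustration; the thesis's actual mechanisms are more elaborate):

```python
def batch_requests(requests, window):
    """Group (time, video_id) requests: requests for the same video that
    arrive within `window` seconds of a batch's start share one multicast
    stream; later requests start a new stream."""
    streams = []                       # list of (start_time, video_id, clients)
    open_batches = {}                  # video_id -> index into streams
    for t, video in sorted(requests):
        idx = open_batches.get(video)
        if idx is not None and t - streams[idx][0] <= window:
            streams[idx][2].append(t)  # join the existing multicast stream
        else:
            streams.append((t, video, [t]))
            open_batches[video] = len(streams) - 1
    return streams
```

With a 5-second window, requests for the same video at t=0 and t=3 share a stream, while a request at t=10 opens a new one.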
-117-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J.A.G. van Pinxteren Rapport nr: ICS-EB 673 16 oktober 1997 Decomposition of Sequential Machines: A Heuristic Algorithm for Macromolecule Packaging dr.ir. L. Jóźwiak prof.ir. M.P.J. Stevens
Summary:
Programmable Logic Arrays with Memory (PLAMs) can be used to implement sequential machines. The graduation report describes the implementation of part of a method for converting the transition-table of a sequential machine into a network of two or more PLAMs. Finding a solution with the absolute minimum number of PLAMs and interconnections between them is an NP-complete problem. Therefore, the method only tries to find a solution that is as close to the minimum as possible within a reasonable time. To accomplish this it uses a technique called 'decomposition', which decomposes (or splits) the original transition-table into a number of small parts (called macro-molecules). A heuristic is then used to form from these small parts a limited number of larger parts (called molecule-blocks), such that each of them fits into a single PLAM and the number of interconnections between the parts is minimal. These larger parts are then converted into PLAM programs that together form a PLAM network implementing the original transition-table. The process of forming these larger parts is called packaging. The graduation report describes a packaging method based on the "beam-search" algorithm. This algorithm tries to find a solution by stepping through a limited number of 'paths'. A 'path' is formed by successively moving each macro-molecule into one of the molecule-blocks. A single step consists of selecting a limited number of promising moves for each partial solution. These moves are then applied to the partial solution, forming a number of new (partial) solutions, from which a limited number is chosen for the next step. The report also describes the main data-structure of the program, the "MoleculeSuperSet", which represents the entire transition-table. The MoleculeSuperSet is a set of MoleculeSets (or macro-molecules) that represent subsets of the transition-table. A MoleculeSet in turn consists of a number of Molecules that represent small parts of the transition-table with the same current state and input-vector. This part of the program has been successfully implemented and tested. There are no known bugs left; however, some data-structures should be optimized to improve performance. The results of the program can very likely be improved by further research on the parameters used during the packaging process.
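The beam-search packaging step can be sketched as follows. This is a minimal illustration with invented names and a simplified cost function (number of blocks used, where the thesis minimises interconnections between blocks):

```python
def beam_pack(items, capacity, beam_width=3):
    """Beam-search packaging sketch: assign each item (a macro-molecule
    size) to a block (a PLAM) of limited capacity, keeping only the
    `beam_width` most promising partial solutions at every step."""
    beam = [[]]                                  # each solution: list of block loads
    for size in items:
        candidates = []
        for blocks in beam:
            # move: put the item into any block that still has room...
            for i, load in enumerate(blocks):
                if load + size <= capacity:
                    new = blocks.copy()
                    new[i] = load + size
                    candidates.append(new)
            # ...or open a new block
            candidates.append(blocks + [size])
        candidates.sort(key=len)                 # fewer blocks = better partial solution
        beam = candidates[:beam_width]           # prune to the beam width
    return beam[0]                               # best complete solution found

# e.g. beam_pack([4, 3, 3, 2], capacity=6) packs into 2 blocks
```

Widening the beam explores more paths at higher cost, which mirrors the report's trade-off between solution quality and scheduling time.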
-118-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
E.A.J. Reuter Rapport nr: ICS-EB 659 28 augustus 1997 Specification of a Distributed Real-time Reactive System, using the Software/Hardware Engineering method dr.ing. P.H.A. van der Putten dr.ir. J.P.M. Voeten prof.ir. M.P.J. Stevens
Summary
The thesis describes the specification of Bührs' next-generation personalised mailing machines using the Software/Hardware Engineering method. A mailing machine creates mailing packets by stacking mailing items on each other on a transporter. The current mailing machine architecture and the implementation of its control software restrict the integration of new features. In a co-operative project of Bührs Zaandam B.V., Eindhoven University of Technology and the TNO section Applied Physics, a new architecture and control are specified. For Eindhoven University of Technology the project is an interesting case for putting the Software/Hardware Engineering method into practice: the experience gained during specification is used to verify, extend and improve the method. The requirements for the next-generation mailing machine and the technologies to be used are specified. Different conceptual solutions have been explored; only the final conceptual solution is described. The current mailing machine architecture is examined to find the restrictions that make new requirements hard to implement on current machines. With the identified problems in mind, a new architecture is developed. The main new features of the architecture are distributed control and data, and the separation between control and configuration. To transport mailing items through the mailing machine, a model is specified that handles all transporter layouts. Further, a rotary feeder is specified. Sensors and actuators of transporters and functions are identified and modelled. Distinctive behaviour of the mailing machine is identified and specified in scenarios. In a system-level analysis, collaborating objects are specified for the different scenarios, guided by a framework offered by SHE. The framework incorporates different graphical representations in which the collaborating objects, the interactions between them, the relations between objects and the structure are visualised. In a sub-system-level analysis the internal behaviour of the transporter and functions is specified. Finally, functional behaviour and structure are formalised by description in the formal language POOSL (Parallel Object-Oriented Specification Language). The different models have been validated in a simulator and behaved as specified. Bührs provides a prototype mailing machine with the proposed architecture to verify the specification. An implementation of the specification must still be realised on the prototype. The project is continued by a graduate student.
-119-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J.P. van Schaik Rapport nr: ICS-EB 654 28 augustus 1997 Technological developments and the influence on networks and services ir. M.J.M. van Weert prof.ir. M.P.J. Stevens
Summary:
The graduation report focuses on broadband networks. Lately, a lot of research and financial effort has been put into the development of various broadband infrastructures. These infrastructures can be used to deliver many different types of services, including video-related services that put constraints on the infrastructure and need some kind of compression. The graduation report bundles the information that has been found and the ideas the author has developed. Because of the enormous amount of information, a hypertext-like architecture was the only readable way to structure the report. This resulted in the idea of creating a real web-site. The web-site has been set up, and the graduation report is a reflection of it. The web-site can be found at: http://www.eb.ele.tue.nl/bio/schaik/index.htm. The different chapters in the report are copies of the pages on the web-site. The sub-chapters are separate pages, hyperlinked to the main chapter. This design has been continued in the paper version of the report: a sub-chapter is marked by the text '(see separate chapter)' in a sentence and can be found directly after the chapter containing the reference. Chapter Two describes compression techniques for video signals. Compression can take place in various ways; the most important standard is the MPEG standard, which seems to be the de-facto standard for compression but still has some drawbacks. Three ways of processing coded bit-streams are described in this chapter. Chapter Three deals with the infrastructure for the delivery of broadband services. It first describes the existing kinds of infrastructures with their basic characteristics. The requirements of an optimal broadband network are then compared with the new network types that arise from upgrading the existing networks. Chapter Four describes the different broadband services and the requirements they impose on the infrastructure. Finally, in Chapter Five, the general conclusions are drawn.
-120-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
R.A.M.J. Snijders Rapportnr.: ICS-EB 670 16 oktober 1997 The design and implementation of a pipelined 8051 microcontroller core. dr.ir. A.C. Verschueren prof.ir. M.P.J. Stevens
Summary:
This master's thesis describes the pipelining of the 8051 microcontroller core. The reason for this assignment is the CAN controller being developed at the section of Information and Communication Systems. This CAN controller consists of three processors, one of which is the application processor: a fast 8051-based microcontroller core. The required speed makes it necessary to use pipelining. The idea of pipelining is to split up the basic function of the microcontroller into several subfunctions. For each subfunction a hardware module, called a stage, is designed. These stages can operate independently of each other and form a pipeline through which a continuous instruction stream flows. Inter-instruction dependencies prevent the microcontroller core from running at its optimal speed. The functionality of the 8051 core is split up into the following stages:
- Instruction Addressing
- Instruction Receiving
- Instruction Decoding
- Operand Addressing
- Operand Receiving
- Instruction Execution
- Operand Write Back
The necessary mechanisms are designed to handle the inter-instruction dependencies, reduce the amount of hardware used and increase the performance. Techniques like Early Branch Calculation and Data Forwarding are used. The design has been implemented and tested, and can run at clock frequencies up to 40 MHz. The performance of the pipelined 8051 is approximately a factor of 10 better than that of the standard core running at 25 MHz. Some extensions, reductions and improvements of the system are suggested.
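The data-forwarding idea behind handling inter-instruction dependencies can be sketched as follows. This is a simplified illustration with hypothetical names and a uniform one-cycle latency, not the actual 7-stage design:

```python
def forwarding_plan(instructions, depth=3):
    """Data-forwarding sketch: for each instruction (dest, srcs), each
    source register is forwarded from the most recent producer that is
    still inside the pipeline (within `depth` earlier instructions),
    instead of stalling until that producer's write-back stage."""
    plan = []
    for i, (dest, srcs) in enumerate(instructions):
        fwd = {}
        for src in srcs:
            # scan backwards over instructions still in flight
            for j in range(i - 1, max(-1, i - 1 - depth), -1):
                if instructions[j][0] == src:        # RAW dependency found
                    fwd[src] = j                     # forward from instruction j
                    break
        plan.append(fwd)
    return plan

# prog: r1 = ...; r2 = f(r1); r3 = g(r1, r2)
prog = [("r1", []), ("r2", ["r1"]), ("r3", ["r1", "r2"])]
# forwarding_plan(prog) -> [{}, {"r1": 0}, {"r1": 0, "r2": 1}]
```

A register not found within the forwarding window would simply be read from the register file; the real core must additionally stall when a result is not yet computed.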
-121-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding:
Afstudeerhoogleraar:
J.N. van Tetrode 28 augustus 1997 Implementing POOSL in C++ ir. M.C.W. Geilen dr.ing. P.H.A. van der Putten dr.ir. J.P.M. Voeten prof.ir. M.P.J. Stevens
Rapport nr: ICS-EB 658
Summary:
The Software/Hardware Engineering (SHE) method offers a formal language called POOSL (Parallel Object-Oriented Specification Language). This thesis describes a method for implementing POOSL in C++. The method incorporates a set of translation rules and a C++ POOSL library that contains the functionality required to implement POOSL in C++. Implementing the data part of POOSL requires a garbage collector for destroying data objects that are no longer needed, because POOSL's new statement, used to dynamically create data objects, has no counterpart for object deletion. For this reason the POOSL library is supplied with a garbage collector; the applied garbage-collection technique is reference counting. Implementing POOSL's process part is not straightforward because of the synchronous inter-process communication in combination with the select, abort and interrupt statements. A process cannot decide for itself with which process it is able to communicate, since this depends on the decisions of possible communication partners. To solve this problem efficiently, a scheduler is used for arbitration. The interrupt and abort statements allow several statements to be active simultaneously, each within its own (local) environment. Whether an active communication statement is executable depends on the communication partners. The scheduler's task is to choose environments that have an executable statement and give these environments permission to execute it. Therefore, prior to executing its active statement, an environment must submit a request and wait for the scheduler to grant it. The implementation method presented in the thesis supports all POOSL statements, including the delay and broadcast extensions. By combining this work with another master's project, the construction of a POOSL compiler, a complete tool has been developed for the automatic translation of POOSL specifications into C++.
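The reference-counting scheme can be sketched as follows (names and the explicit `live` set are invented for illustration; the actual library is in C++ and ties the counts to object construction and assignment):

```python
class RefCounted:
    """Reference-counting sketch: an object is destroyed as soon as its
    last reference disappears.  (Plain reference counting cannot reclaim
    cyclic structures; this sketch ignores that limitation.)"""
    def __init__(self, payload):
        self.payload = payload
        self.refcount = 0

def acquire(obj):
    """Register one more reference to the object."""
    obj.refcount += 1
    return obj

def release(obj, live_set):
    """Drop one reference; reclaim the object when none remain."""
    obj.refcount -= 1
    if obj.refcount == 0:           # last reference gone: destroy the object
        live_set.discard(obj)

live = set()
a = acquire(RefCounted("data"))
live.add(a)
release(a, live)                    # refcount drops to 0, object reclaimed
```

The same acquire/release discipline is what the generated C++ code would have to perform on every assignment of a POOSL data object.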
-122-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
A.M.A. Wouters Rapport nr: ICS-EB 646 12 juni 1997 De PC-based switch; Ontwerp en Realisatie dr.ir. J.H.G. van Pol (KPN Research) ir. M.J.M. van Weert prof.ir. M.P.J. Stevens
Summary:
The telephone has become an indispensable means of communication, also within companies, where in the business environment a number of special functions, such as abbreviated dialling, have become commonplace. These functions are possible because companies have their own telephone exchange (PBX), on which such functions can be realised independently of the public network. With the rise of information technology, the link between IT and telecommunications becomes interesting as well; think, for example, of dialling from a phone list. A new development in the field of Computer Telephony is the PC-based switch (PCBX), in which the PBX is fully integrated into the PC. The development of PCBXs has reached a stage at which the PCBX can be regarded as an alternative to the PBX. Problem statement
Within KPN there is little knowledge of PCBXs. Technical and market developments around PBXs indicate, however, that in the near future the PCBX can be regarded as an alternative to the PBX. It is therefore important for KPN to gain knowledge of, insight into and experience with the development and possibilities of PCBXs. Goal The following goals were set within the project: to design and realise a PCBX in order to explore the possibilities and limitations of a PCBX; to realise a PCBX for demonstration purposes; and to deliver a report documenting the development and the concepts used in the PCBX. Conclusions
The PCBX offers great flexibility. This is due on the one hand to the use of the SCSA standard, which guarantees an open system, and on the other hand to the use of object-oriented software. These properties make it possible to adapt the capacity of the system and to add applications and features to it in a relatively simple way, and thus within a short time. Recommendations
To bring out the advantages and possibilities of PCBXs more clearly, it is recommended to develop new features for the PCBX, in which person-specific preferences and properties can be used. It is also interesting to couple existing CTI applications to the PCBX.
-123-
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Begeleiding: Afstudeerhoogleraar:
A. Ventevogel Rapport nr. EB 641 13 februari 1997 An Intelligent Peripheral in the Intelligent Network prof.ir. J. de Stigter ir. J. van der Meer (Ericsson Rijen) prof.ir. J. de Stigter
Summary
Intelligent Networks enhance the existing telephony network with possibilities to provide a variety of services to the users. Especially in these days of competition in the network and of fast-evolving technologies, this gives operators the opportunity to distinguish themselves from others and thus become more interesting for customers. Ericsson Telecommunications is a leading company in the area of Intelligent Networks. The Intelligent Networks Application Laboratory (INAL), part of the R&D division of Ericsson Telecommunicatie B.V. in Rijen, was instituted to profit fully from the opportunities provided by the Intelligent Network concept. One of the projects in the INAL, the Rapid Service Prototyping (RSP) tool, is a model of the real Ericsson Intelligent Network. It provides Ericsson with a means to gain insight into the consequences and demands that the introduction of new services in the Intelligent Network places on the nodes in the network and on the network capabilities. The RSP-tool also makes it possible to test new ideas within a very short time. However, the RSP-tool is not yet complete. One of the missing parts is a physical entity that contains specialized resources in the network: the Intelligent Peripheral (IP). The Intelligent Peripheral takes care of specialized tasks in the network, such as recording and playing speech messages. To get the full potential out of the Intelligent Network capabilities, the Intelligent Peripheral is a very important node. In this document I provide an overview of the research into such an Intelligent Peripheral, aimed especially at the wish to extend the RSP-tool with such a node. First, the research concentrates on the functional aspects of an Intelligent Peripheral. The result is a set of capabilities that the IP should at least provide. The capabilities of the prototype to be developed will be used in a service that is also described in the report. An important issue is the distribution of logic in the Intelligent Network, which is described extensively. The research also covers the architecture of an Intelligent Peripheral, dealing with aspects like scalability and modularity. It results in a proposal for the architecture of the prototype Intelligent Peripheral, which has to interwork with the RSP-tool. Furthermore, research into the interfaces of an Intelligent Peripheral is needed. To integrate the Intelligent Peripheral into the existing telephony network, two options are described, and one of them is selected for use in the prototype in the RSP-tool. Before the results of the research are used to implement the prototype, the report gives a formal description of the system. The Specification and Description Language (SDL) is used to describe and explain the breakdown of the prototype into four functional blocks; it also describes the processes in the prototype at a detailed level. Finally, the report describes the implementation issues of the prototype Intelligent Peripheral. In the implementation, the results of the research are used as a guideline. The report describes the hardware and software architectures needed to provide the prototype with the demanded capabilities and with interfaces to the RSP-tool in the Intelligent Network Application Laboratory.
-124-
LEERSTOEL AUTOMATISCH SYSTEEM ONTWERPEN
-125-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
P.H.M. Kicken 12 juni 1997 Implementing the ADRC algorithm within an MPEG-2 video decoder. dr.ir. J.A.A.M. van den Hurk prof.dr.ing. J.A.G. Jess
Summary:
In this thesis the ADRC (Adaptive Dynamic Range Coding) technique is successfully implemented in an MPEG (Moving Picture Experts Group) video decoder, resulting in a memory requirement of less than 16 Mbit of memory components. MPEG-2 video decoders normally demand over 16 Mbit of memory. Within Philips a goal was set to implement a video decoder that does not exceed the 16 Mbit border. The idea was to free memory for other MPEG-2 services, such as audio and graphics, by compressing the video data. The ADRC technique has been evaluated for implementation in an MPEG-2 video decoder. ADRC was specially developed for video data reduction and makes use of the spatial correlation within a video signal. With the ADRC technique it is possible to reduce the memory usage of video images by a factor of two without affecting the subjective picture quality. With this factor-two compression, less (expensive) memory is needed, which makes the ADRC algorithm economically very interesting for use in MPEG-2 video decoders.
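The core of ADRC can be sketched per pixel block as follows. This is a simplified fixed-bit-count version with invented names; the real coder adapts the bit count to each block's dynamic range:

```python
def adrc_encode(block, bits=4):
    """ADRC sketch for one pixel block: store the block minimum and its
    dynamic range once, then requantise every pixel to `bits` bits
    within that range (spatially correlated pixels have a small range)."""
    lo, hi = min(block), max(block)
    dr = hi - lo
    levels = (1 << bits) - 1
    if dr == 0:                      # flat block: only the minimum is needed
        return lo, dr, [0] * len(block)
    codes = [round((x - lo) * levels / dr) for x in block]
    return lo, dr, codes

def adrc_decode(lo, dr, codes, bits=4):
    """Reconstruct approximate pixel values from the block parameters."""
    levels = (1 << bits) - 1
    if dr == 0:
        return [lo] * len(codes)
    return [lo + round(c * dr / levels) for c in codes]

block = [100, 104, 108, 116]
lo, dr, codes = adrc_encode(block)        # 8-bit pixels stored in 4 bits each
restored = adrc_decode(lo, dr, codes)     # close to the original block
```

Storing 4-bit codes plus a small per-block header instead of 8-bit pixels gives roughly the factor-two memory reduction described above.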
-126-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
R. Oome
13 februari 1997 Forward Error Correction in the ETSI Standard for Digital Terrestrial Television prof.dr.ing. J.A.G. Jess prof.dr.ing. J.A.G. Jess
Summary
In the ETSI standard for digital terrestrial television, compatible with MPEG-2 coded TV signals, a baseline transmission system is specified for channel coding and modulation. For Forward Error Correction (FEC) at the transmitter, a concatenated code is applied, consisting of a shortened Reed-Solomon code RS(n=204, k=188, t=8) and a variable-rate punctured convolutional code with constraint length 7. Performance is further improved by a convolutional Outer Interleaver (Forney approach) and a block-based Inner Interleaver. In this paper, the signal processing for error correction at the receiver side is investigated, leading to specifications and algorithms for the different decoder blocks, modelled in the C programming language, and to some architecture considerations for a hardware implementation, all of which meet the global performance requirements. The performance of the codes is investigated to find optimal decoder settings. The Reed-Solomon decoder is modelled using the two decoding algorithms most applied in practice at this moment: the Euclidean algorithm and the Berlekamp-Massey algorithm. The convolutional code is decoded using a Viterbi algorithm, where test results show that soft decision must be used to meet the performance requirements. Soft decision is investigated up to 7 bits per coded bit, with special attention to quantization borders in the different Gray-mapped OFDM constellations. Test results show that a 3-bit soft decision per coded bit suffices to give satisfying performance.
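The Viterbi principle can be sketched with a toy rate-1/2, constraint-length-3 code and hard decisions (the standard's inner code has constraint length 7, and the thesis shows soft decision is needed in practice; names here are invented):

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3 (a toy version
    of the standard's K=7 code).  Each input bit yields two coded bits."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out += [bin(state & gi).count("1") & 1 for gi in g]   # tap parities
    return out

def viterbi_decode(received, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoder: per encoder state, keep only the
    path with the lowest Hamming distance to the received sequence."""
    n_states = 4                                 # 2^(K-1) with K = 3
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)        # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        sym = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                     # hypothesise input bit b
                full = ((s << 1) | b) & 0b111
                ns = full & 0b011                # next state = low 2 bits
                exp = [bin(full & gi).count("1") & 1 for gi in g]
                m = metric[s] + sum(x != y for x, y in zip(exp, sym))
                if m < new_metric[ns]:           # survivor path selection
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]
```

Soft decision replaces the Hamming branch metric with a distance over quantised channel values, which is where the 3-bit quantisation result above comes in.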
-127-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
E.H.M. Wilms 28 augustus 1997 Profile-driven Instruction Scheduling for TriMedia dr.ir. J. van Eijndhoven (TUE), dr.ir. M. Verhoeven (Philips Research) prof.dr.ing. J.A.G. Jess
Summary:
The TriMedia TM-1000 processor has been developed to deliver real-time multimedia performance. The TM-1000 is able to process sound, vision and data concurrently and synchronously. To reach this high performance it uses a VLIW (Very Long Instruction Word) architecture to exploit ILP (Instruction-Level Parallelism), that is, parallel execution of a program's machine code. The challenging aspect of VLIWs is instruction scheduling, i.e. reordering operations and packing them into instructions at compile time. The optimisation goal is to produce code that executes fastest on average. Scheduling involves deciding which operation to execute when, and on which hardware resource, subject to constraints that ensure correct semantics and correct hardware usage. Being a generally intractable problem, practical scheduling is done with heuristics like list scheduling. List scheduling is guided by priorities that identify the order in which operations need to be scheduled to fulfil the optimisation goal. The novel scheduling method described in this report uses profile information to partition the set of operations and to rank the partitions, indicating the order in which they need to be scheduled. Profile information gives the execution frequency of operations and is therefore a measure of an operation's importance. This project examines the feasibility of applying this theory to decision trees, the scheduling scope of TriMedia. For scheduling convenience, a program is divided into subparts called scheduling scopes. Implementing the method in TriMedia is relatively easy. However, due to the structure of decision trees and of the scheduler itself, there are some practical implementation issues that need to be identified and dealt with. The performance of the profile-driven method is comparable with that of the current scheduling approach, i.e. it reaches the same speeds on average. A drawback of the method is the possible explosion of scheduling time. Combined with the minor improvements, these observations lead to the conclusion that applying this method to decision trees is not advisable and that, at this point, it has little use for TriMedia.
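The profile-driven priority idea can be sketched as a small list scheduler (names invented; operation latencies, the resource model and decision trees are all simplified away, and the dependence graph is assumed acyclic):

```python
def list_schedule(ops, deps, freq, slots=2):
    """List-scheduling sketch with profile-driven priorities: among the
    operations whose predecessors have finished, the ones with the
    highest execution frequency (profile count) are issued first.  All
    operations take one cycle; `slots` models the VLIW issue width."""
    done, schedule, cycle = set(), {}, 0
    while len(done) < len(ops):
        # operations whose data dependencies are satisfied
        ready = [o for o in ops
                 if o not in done and all(p in done for p in deps.get(o, []))]
        ready.sort(key=lambda o: -freq.get(o, 0))   # hottest operations first
        issued = ready[:slots]                      # fill the issue slots
        for o in issued:
            schedule[o] = cycle
        done.update(issued)
        cycle += 1
    return schedule

# "a" and "c" are executed often, so they are scheduled before "b" and "d"
plan = list_schedule(["a", "b", "c", "d"],
                     deps={"c": ["a"], "d": ["b"]},
                     freq={"a": 10, "b": 1, "c": 10, "d": 1},
                     slots=1)
```

A conventional list scheduler would use a structural priority such as critical-path length where this sketch uses the profile count.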
-128-
LEERSTOEL ELEKTRISCHE ENERGIESYSTEMEN
-131-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
C.E. Bouwmeester 28 augustus 1997 Synthetische beproevingscircuits voor MS schakelaarontwikkeling; in het bijzonder voor vacuümschakelaars dr.ir. W.F.H. Merck prof.ir. G.C. Damstra
Summary:
Continuous development in the field of medium-voltage switchgear calls for continuous development of circuits to test these circuit breakers. For several decades, synthetic test techniques have been among the methods used in breaker testing. This report discusses a number of synthetic test circuits: partly existing circuits, but also a number of new ones. First, a historical overview from 1900 onwards is given of existing test techniques and circuits, covering developments in industry as well as at the Department of Electrical Engineering of Eindhoven University of Technology. Next, five synthetic circuits are treated: a Weil-Dobke circuit, a synthetic making circuit, an inductive breaking circuit, a three-phase breaking circuit and a capacitive breaking circuit. The last two have actually been built and tested. Although the two circuits built can be used for many kinds of tests, this graduation project investigated to what extent they can be used for pressure measurements on vacuum circuit breakers in the field. Both circuits have a compact design and can therefore be transported to the substations where the vacuum breakers are installed. With respect to the pressure-measurement tests, progress has been made compared with the test-voltage method: with both the three-phase and the capacitive breaking circuit, vacuum breakers with a pressure above 5·10⁻³ mbar can be detected, whereas the limit of the test-voltage method was 3·10⁻² mbar. For future research the capacitive circuit should be scaled up to higher currents and voltages (currently 12 kV at 200 A RMS). Second, the inductive circuit can be built very simply from the components of the capacitive circuit, so that tests with a precisely adjustable recovery voltage can be performed. Finally, research can still be done into a 'multi-purpose' test circuit: the various circuits discussed are largely built from the same components, so it should be possible to design a 'multi-purpose' test circuit with which different tests can be carried out shortly after one another.
-132-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
Rapportnr.: EG/97/851 J.G.R. van Dorst 12 juni 1997 Hoogfrequente metingen in het afschakelgebied van een vermogensschakelaar. dr.ir. R.P.P. Smeets prof.ir. G.C. Damstra
Summary:
In KEMA's short-circuit laboratory, mainly circuit breakers for high-voltage networks are tested. To be able to offer customers a new test service, a new measurement system has been designed that measures the current and voltage of an interrupting circuit breaker very accurately. With these data it can then be predicted how the breaker will behave in power networks. The measurement system consists of a voltage divider, a current sensor developed by KEMA called the M-coil, and thirdly a transient recorder. During the graduation project the first two components were examined for their high-frequency behaviour; the third component was built by KEMA's engineering department. Simulations of the voltage divider showed the influence of the various parasitic components. The divider was then adapted to the new measurement situation. Compared with the starting situation, this yielded an improvement for the 1:1000 division (bandwidth 3 MHz), in contrast to the 1:10,000 divider; to obtain an improvement there as well, the low-voltage part of the voltage divider must be reconstructed. A model was also made of the M-coil and simulations were carried out, which showed the importance of the damping resistors that were applied. Calibration measurements were then performed on both a damped and an undamped M-coil; these showed that signals with a frequency of 2 MHz, sampled at 100 MHz, could be reconstructed well. Finally, the complete measurement system was tested in practice in measurements on a 245 kV circuit breaker, where the system again behaved well. It has been shown that directly after the current zero crossing a post-arc current of a few hundred mA can be measured with the M-coil.
-133-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
Rapportnr: EG/97/862 F.T.J. van Erp 28 augustus 1997 Thyristor uitgevoerde stroombegrenzende gelijkstroomschakelaar. ir. J.G.J. Sloot prof.ir. G.C. Damstra
Summary:
Door het toepassen van grotere vermogens in gelijkspanning- en wisselspanningsinstallaties neemt het kortsluitvermogen toe. De grotere vermogens kunnen ontstaan doordat op de reeds aanwezige installatie een uitbreiding van het aantal voedingspunten plaatsvindt. Een grater kortsluitvermogen in gelijkstroomnetten leidt tot hogere kortsluitstromen met grotere stroomstijlheden. Conventionele gelijkstroomschakelaars reageren te traag waardoor een slechte of geen begrenzing van de kortsluitstroom plaatsvindt. Om de hoge kortsluitstromen te begrenzen zal een snel reagerende schakelaar een uitkomst bieden. Een snel reagerende schakelaar is een hybride schakelaar. De hybride schakelaar combineert mechanische- en halfgeleiderschakeltechnieken. Het afstudeerwerk zal gebruikt worden om het gedrag te voorspellen van een te ontwikkelen hybride gelijkstroomschakelaar met een nominale spanning welke ligt tussen de 750V en 1500V. Het uitgangspunt van het afstudeerwerk is het concept van een te ontwikkelen hybride gelijkstroomschakelaar waarbij de mechanisch schakelende delen zijn vervangen door halfgeleiders. Het aldus ontstane stroombegrenzende gelijkstroomschakelaar is een met halfgeleiders uitgevoerde stroombegrenzende gelijkstroomschakelaar (solid-state DC current limiter). Het uitgangscircuit van de met halfgeleiders uitgevoerde stroombegrenzende gelijkstroomschakelaar is uitgebreid met componenten om de halfgeleiders te beschermen tegen te hoge transiente spanningen. Om de magnetische energie te dissiperen zijn aan het uitgangscircuit componenten toegevoegd. De magnetische energie is opgebouwd in de inductantie van de voedingsbron en de spoelen van de met halfgeleiders uitgevoerde stroombegrenzende gelijkstroomschakelaar. Met een model van stroombegrenzende gelijkstroomschakelaar wordt door middel van negen toestanden het verloop van de stromen en spanningen voorspeld. 
A test was carried out to verify the behaviour of the realised current-limiting DC switch. The solid-state current limiter switches onto a short circuit located at its outgoing terminals. The source has a nominal voltage of 900 V. The prospective short-circuit current has a rate of rise of 8 A/µs. The switch comes into action after a delay of 100 µs from the start of the short circuit, by which time the short-circuit current has reached 670 A. The short-circuit current is interrupted within 1.070 ms and reaches a peak value of 2900 A.
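The quoted test figures can be cross-checked with a simple R-L source model. The sketch below is illustrative only: the source voltage and initial rate of rise are taken from the abstract, but the loop resistance is an assumed value chosen to reproduce the quoted 670 A.

```python
import math

# Back-of-the-envelope check of the test figures quoted above.  Only
# V = 900 V and di/dt = 8 A/us are given; R below is an assumption.
V = 900.0            # source voltage [V]
didt0 = 8e6          # initial rate of rise of the prospective current [A/s]

# For an R-L source circuit the initial slope is V/L, so the loop inductance is:
L = V / didt0        # -> 112.5 uH

# With an assumed loop resistance R, the prospective current is
# i(t) = (V/R) * (1 - exp(-R*t/L)).  R ~ 0.41 ohm reproduces the ~670 A
# reached after the 100 us delay time mentioned in the abstract.
R = 0.41
def i_prospective(t):
    return (V / R) * (1.0 - math.exp(-R * t / L))

print(f"L = {L*1e6:.1f} uH, i(100 us) = {i_prospective(100e-6):.0f} A")
```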
-134-
Candidate: J.F.L. van Casteren
Report no.: EG/97/852
Graduation date: 12 June 1997
Project: A new stochastic model for sequential and non-sequential calculation of power system reliability indices
VF programme / research theme: Elektrische Energietechniek
Supervision: dr. M.E. Schmieg (IKE, Gomaringen, Germany)
Graduation professor: prof.dr.-ing. H. Rijanto
Summary:
No engineering field has had to wait so long for a mathematical foundation and for acceptance by designers and constructors as the field of power system reliability engineering. The need for reliability studies based on a probabilistic approach was recognised in several publications in the 1930s, but the first significant contributions were not published until 1947. In the 1950s and 1960s, the fast-growing complexity of systems and the increasing demand for higher reliability in the nuclear, electronic and space industries made reliability engineering more and more a vital tool in designing these systems. Reliability theory is not simple, however, and the probabilistic approach in particular demands intensive calculation and abstract modelling. The power failure of November 1965, when large parts of the northeastern United States and of eastern Canada suffered a blackout for several hours, made the need for better reliability assessment techniques obvious. The last 20 years have shown a rising interest in the calculation of reliability indices in the design phase and operating life of power systems. Because of the complexity of assessing such indices, which is theoretical, technical as well as conceptual, widespread commercial use of reliability assessment programs is still in development. The field of power system reliability engineering is still very fragmented, and many new and often promising assessment methods seem to stand alone. At this moment a wide range of assessment tools is available, from very fast but simplified calculations to somewhat slow but highly accurate analysis methods, and several combined or hybrid methods have been developed. Most of these methods use homogeneous power system models, in which all stochastic quantities have negative exponential probability distributions. These homogeneous methods are fast but inflexible: many real-life aspects of power systems cannot be incorporated.
Non-homogeneous models, which use more accurate distributions, are far more difficult when it comes to assessing reliability indices in a reasonable amount of computing time. This is mainly because the only available method for such systems is the flexible but slow sequential Monte Carlo simulation. The problem is thus that we can either use a fast method, which is not acceptable because it is too far from reality, or a highly flexible real-life method, which is not acceptable because of its high computational demands. A new stochastic model has been developed which makes it possible to use analytical calculations for the assessment of component parameters, and with which sequential as well as non-sequential assessment methods can be implemented with realistic duration distributions. The new model has been named Weibull-Markov and is a special kind of semi-Markov model with Weibull-distributed stochastic durations. It is shown that the Weibull-Markov model can serve as the base model for a general strategy for the calculation of reliability indices. With the new model a new kind of reliability assessment is possible, which analyses the system state space around a selected critical system state. This "local state space scanning" may also be used to analyse dynamic disturbances (transient analysis) during the reliability assessment.
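The idea of Weibull-distributed state durations in a semi-Markov model can be illustrated with a minimal sequential Monte Carlo sketch. This is a hypothetical two-state repairable component, not the author's implementation; all shape and scale values are invented.

```python
import random, math

# Illustrative two-state (up/down) semi-Markov component with
# Weibull-distributed state durations, as in the Weibull-Markov model
# described above.  Shape/scale parameters are invented placeholders.
def weibull(shape, scale):
    # Inverse-transform sampling of a Weibull variate.
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

def simulate(t_end, up=(2.0, 1000.0), down=(1.0, 10.0)):
    """Sequential Monte Carlo walk; returns the fraction of time spent 'up'."""
    t, t_up, state = 0.0, 0.0, "up"
    while t < t_end:
        dur = weibull(*(up if state == "up" else down))
        dur = min(dur, t_end - t)          # truncate at the mission time
        if state == "up":
            t_up += dur
        t += dur
        state = "down" if state == "up" else "up"
    return t_up / t_end

random.seed(1)
print(f"estimated availability: {simulate(1e6):.3f}")
```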
-135-
Candidate: J. Pellis
Report no.: EG/97/864
Graduation date: 28 August 1997
Project: The DC low-voltage house
Supervision: ir. R.W.P. Kerkenaar, ir. P.J.N.M. van de Rijt, ing. K.H.T.J. van Otterdijk (ECN)
Graduation professor: prof.dr.ing. H. Rijanto
Summary:
The use of photovoltaic (PV) energy in buildings is usually associated with a connection to the public electricity grid. The grid connection requires a conversion from direct current (DC) to alternating current (AC). This conversion enables both the use of standard AC household equipment and a connection to the public electricity grid. Many household appliances, however, function internally on DC: within the AC equipment an alternating voltage of about 230 V is transformed to a (low) DC voltage, for example 12 V. Utilising PV energy in this way involves two energy conversions with inherent energy losses. It is therefore reasonable to assume that these losses could be avoided by introducing a DC (low-voltage) grid. The feasibility of 'the DC low-voltage house', set within predefined boundary conditions, is the subject of this report. The first part of the research focused on household energy consumption. It became apparent that DC supply of household appliances is possible, but does not automatically reduce energy losses. The second part of the research concentrates on the DC low-voltage distribution system. It became clear that, due to voltage and power losses, it will not be possible to satisfy the present power demand in households with a very low voltage distribution system. The main problems to be overcome in the design of the DC low-voltage distribution system are the switching of DC currents and the limitation of short-circuit currents. The results of the first two parts of the research lead to conclusions on the feasibility of the DC low-voltage house. Observing the boundary conditions of the project, a change from AC to DC low voltage in houses is not very promising, and a large reduction of energy losses is not expected.
Taking other conditions and circumstances into consideration (for example a very small power demand, the presence of a public DC electricity grid, or the supply of certain types of appliances) may lead to a more positive assessment of the DC low-voltage house.
-136-
LEERSTOEL HOOGSPANNINGSTECHNIEK & ELEKTROMAGNETISCHE COMPATIBILITEIT
-137-
Candidate: J.F.T. Pesgens
Report no.: EH.97.A.150
Graduation date: 16 October 1997
Project: Towards an improved measurement of lightning impulse voltages
Research theme: EHC 40
Supervision: ir. S.M. Benda-Berlijn, dr.ir. J.M. Wetzer
Graduation professor: prof.dr.ir. P.C.T. van der Laan
Summary:
This thesis report describes the research performed during the first phase, and the testing done during the second phase, of the European project "Digital measurement of parameters used for lightning impulse tests on high voltage equipment" (EC-PL95-1210). It starts with a general description of a standard lightning impulse and its variations. The standard IEC 60-1 (ed. 1989) is discussed with respect to lightning impulse testing and its shortcomings when digital measuring systems are used. Based on these shortcomings and a discussion with a transformer manufacturer, a list of parameters is suggested and a questionnaire was drawn up. This questionnaire was sent to several test and calibration laboratories around the world; the preliminary results are discussed in this report. Furthermore, the literature was investigated to see whether sufficient information was available to support the use of alternative parameters when measuring an LI. The results of this investigation showed that this is not the case. It is therefore necessary to investigate the impulse breakdown behaviour of different insulating materials while varying the (alternative) parameters considered relevant. Whether oscillations and/or overshoot have a significant influence on the breakdown behaviour was also considered interesting to investigate. In order to investigate the breakdown behaviour, a proper procedure and a test circuit were designed; both are explained in detail, and simulation results for the circuit are given as well. Furthermore, some considerations regarding the shape of the electrodes and the test object are given. In order to use as little insulation material as possible, the waveshapes to be generated by the test circuit are first investigated without the actual test object included. The result of this series of tests is given in the last chapter; it shows reasonable agreement with the simulated test circuit results.
The main conclusion is that the circuit has to be changed in order to obtain better performance. A new circuit is proposed for the remaining time of the test phase. The procedures used to generate and measure an LI can still be used in this new circuit, as can the proposed electrode shapes and the test objects themselves. The influence of the different components is the same for both circuits, so these results can also be reused. It is recommended to simulate this new circuit with the help of a simulation program.
-138-
Candidate: H.J. Zwier
Report no.: EH.97.A.149
Graduation date: 24 April 1997
Project: EMC of multilayer printed circuit boards: susceptibility to and generation of interference currents
Research theme: EMC
Supervision: dr. A.P.J. van Deursen, ir. F.B.M. van Horck
Graduation professor: prof.dr.ir. P.C.T. van der Laan
Summary:
An electronic circuit is often designed by computer; an analogue or digital simulator can then test the design before realization. Nevertheless, the prototype on the printed circuit board (PCB) often does not meet the design goals: the circuit does not function as it should, and/or it does not comply with the EMC regulations. Redesigns are expensive and time-consuming. A set of basic EMC design rules could help the designer and speed up the realization, thus shortening the time to market. These EMC design rules (the subject of F. van Horck's future thesis) are based on analytical or numerical calculations, and are tested for several types of multilayer PCBs: 1) with straight or bent traces, but without electronic devices; 2) with continuous ground planes or ground planes with slits; 3) with sinusoidal test signals or with digital logic devices. In EMC terminology the circuits on the PCB are called differential mode (DM) circuits. Two couplings can be distinguished: a) between the DM circuits themselves, and b) between the DM circuits and the circuit formed by the environment. The latter circuit is called the common mode (CM) circuit. Both couplings are described by a transfer impedance Zt and a transfer admittance Yt. The long cables which may be attached to the PCB act as large antennas. The DM current inside the cable is the intended signal; the CM current through the cable is the interference, either received or emitted. In a first approximation one assumes a 150 Ω radiation impedance for the cable. The driving source may be the PCB via the DM-to-CM coupling, and the investigations aim to model this source in a simple but accurate way. Both DM-CM and DM-DM coupling were measured. The measurements on PCBs of types 1) and 2) were carried out with a spectrum analyser between 10 Hz and 1.8 GHz. The PCBs of type 3) carry HCT and HLL CMOS logic circuits with clock frequencies between 10 and 100 MHz; for these, a 4 GHz digital oscilloscope was used.
Elementary EMC precautions guaranteed that only the desired effects were measured, and automation of the measurements facilitated data retrieval. The measurements have been compared with simulations by F. van Horck. Presently, measurements and the special transmission line model agree well up to 600 MHz for all types of PCBs with a length of 200 mm.
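The role of the transfer impedance can be illustrated with a back-of-the-envelope emission estimate. The sketch below uses the 150 Ω radiation impedance mentioned above; the Zt and DM-current values are invented, not measured values from the report.

```python
# Rough emission estimate using the DM-to-CM transfer impedance Zt defined
# above: the DM signal current drives a CM voltage Vcm = Zt * Idm, which in
# turn pushes a CM (interference) current through the cable's ~150 ohm
# radiation impedance.  Numeric values below are illustrative only.
def cm_current(zt_ohm, idm_amp, z_rad=150.0):
    vcm = zt_ohm * idm_amp          # CM drive voltage produced by the PCB
    return vcm / z_rad              # CM current flowing in the cable

icm = cm_current(zt_ohm=0.05, idm_amp=0.02)   # 50 mohm Zt, 20 mA DM current
print(f"Icm = {icm * 1e6:.2f} uA")
```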
-139-
OUTSIDE THE FACULTY
-141-
Candidate: M.J. Konert
Report no.: IP-1173
Graduation date: 28 August 1997
Project: Perceived affordances in using a trackball
Supervision: dr. D. Keyson
Graduation professor: prof.dr. D.G. Bouwhuis
Summary:
Research is continuously being conducted towards improving human-computer interaction. Input devices like mice and trackballs play a major role in interaction with graphical user interfaces. A user interface may become more natural and intuitive if the perceived affordances of an input device are similar to those of interacting with physical objects in the real world. Since many people have experienced the characteristics of a rolling ball, one could attempt to implement the feedback of the rolling movement of a real ball in a trackball. The current study focuses on the capability of users to recognize the movement characteristics of the trackball as those of a real ball. Towards solving this problem, the current work consists of several steps:
- studying the characteristics of real balls set into a rolling motion by subjects;
- implementing these ball rolling characteristics in trackball control in order to simulate both visual and tactual feedback of the rolling movement;
- determining the quality of the simulation compared to rolling a real ball, in terms of ball rolling accuracy;
- examining the role of visual feedback in rolling a real ball in comparison with a trackball.
In the implementation both tactual and visual feedback of the ball rolling movement are simulated. The idea is to give the user the sensation that the ball is rolling freely, although in reality the actual trackball only rotates during a short interval. Subject performance in terms of ball rolling accuracy was found to be similar in the real ball and trackball experiments for shorter distances, given visual feedback. However, ball rolling accuracy in the trackball study was found to be lower without visual feedback, as compared to the real-world study. Therefore, it can be concluded that:
- tactual feedback plays a major role in the user's perception of how to roll a ball; however, simulated visual feedback can compensate for the lack of tactual feedback;
- real ball rolling does not produce sufficiently accurate target acquisition;
- the simulation of the tactual feedback fell short of reality.
In order to use a rolling ball model in user interface design, the simulated feedback should be further improved and should lead to even better performance than the rolling of real balls. Therefore, the following recommendations are made. In order to make the trackball more similar to reality, the resistive force of the trackball as subjectively experienced by the user may be reduced; for the same reason the force feedback to the user should be improved. It may also be an option to allow the user to correct ball movements during rolling, and to focus on the aspect of rolling direction instead of rolled distance. Furthermore, the ball rolling model developed here can be used for an experimental study examining the benefits of dynamically activated force fields over targets, based on prediction of the endpoint using the ball rolling model.
-142-
Candidate: M.C. Willemsen
Graduation date: 28 August 1997
Project: Subjective evaluation of JPEG-coded images: quality versus impairment ratings
Supervision: dr. H. de Ridder, dr.ir. A.C. den Brinker
Graduation professor: prof. A. Houtsma (IPO, visual group)
Summary:
In this report, three experiments are described relating to judgements of JPEG-coded images. The JPEG (Joint Photographic Experts Group) algorithm is a frequently used means of compressing still images. The quantization used in this algorithm causes artefacts in the coded images. Experiment 1 was a numerical category scaling experiment in which subjects had to rate the strength of each of three artefacts (blockiness, ringing and blur) on a scale from 0 to 10; subjects also had to judge the image quality on a numerical category scale. In experiment 2 subjects had to rate, on a percentage scale, how much each artefact contributed to the overall impairment of an image. The results of experiments 1 and 2 were combined. They show the strength of each artefact as a function of the quantization of the coded image, and also suggest that there is a difference in the weighting of dimensions between quality and impairment judgements, resulting in a non-linear relation between quality and impairment ratings. A theoretical foundation for this finding was given, based on theories from cognitive psychology about similarity, categorization and decision making: a quality judgement is said to be based on the most prominent dimension, whereas an impairment judgement is based on a weighted judgement of all dimensions in the stimulus set. Experiment 3 confirmed this hypothesis, but only for a double-stimulus experimental approach in which subjects were able to make a side-by-side comparison; with a single-stimulus method, impairment and quality judgements did not differ much. In experiment 3 normal JPEG-coded images were used, as well as JPEG-coded images containing only blockiness artefacts; these images were manipulated via the quantization table of the JPEG algorithm. This report shows that there are differences between quality and impairment judgements, based on the weight given to the dimensions in the stimulus set.
Whether these differences are revealed, depends on the experimental method used.
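The blockiness, ringing and blur studied above all originate in JPEG's quantization step, in which each 8×8 DCT coefficient is divided by a quantization-table entry and rounded. A minimal sketch of that step (the coefficients are hypothetical; the table values are the first four entries of the standard luminance quantization table):

```python
# JPEG quantization/dequantization of DCT coefficients: coarser table
# entries discard more detail, which is what produces the blockiness
# artefact studied above.  Coefficients below are invented.
def quantize(coeffs, qtable):
    return [round(c / q) for c, q in zip(coeffs, qtable)]

def dequantize(levels, qtable):
    return [l * q for l, q in zip(levels, qtable)]

coeffs = [212.0, -31.0, 8.0, 3.0]      # hypothetical DCT coefficients
qtable = [16, 11, 10, 16]              # first entries of the luminance table
rec = dequantize(quantize(coeffs, qtable), qtable)
print(rec)
```

Scaling the whole table up (as done via the quantization table in experiment 3) coarsens every step at once and strengthens the artefacts.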
-143-
SUMMARIES OF GRADUATION REPORTS, FACULTY OF ELECTRICAL ENGINEERING
1998
Eindhoven University of Technology accepts no liability for the contents of the summaries of graduation reports included in this volume.
CONTENTS
CAPACITEITSGROEP TELECOMMUNICATIE TECHNOLOGIE & ELEKTROMAGNETISME
Leerstoel Telecommunicatie .......... 5
Leerstoel Elektronische Bouwstenen .......... 21
Leerstoel Elektromagnetisme .......... 23
CAPACITEITSGROEP MEET & BESTURINGSSYSTEMEN
Leerstoel Meten en Regelen .......... 29
Leerstoel Signaalverwerking .......... 45
Leerstoel Medische Elektrotechniek .......... 53
Leerstoel Elektromechanica & Vermogenselektronica .......... 59
CAPACITEITSGROEP INFORMATIE & COMMUNICATIESYSTEMEN
Leerstoel Digitale Informatiesystemen .......... 69
Leerstoel Ontwerpkunde voor Elektronische Systemen .......... 83
Leerstoel Elektronische Schakelingen .......... 91
CAPACITEITSGROEP ELEKTRISCHE ENERGIETECHNIEK
Leerstoel Elektrische Energietechniek .......... 101
Leerstoel Hoogspanningstechniek & Elektromagnetische Compatibiliteit .......... 107
-1-
CAPACITEITSGROEP TELECOMMUNICATIE TECHNOLOGIE & ELEKTROMAGNETISME
-3-
Leerstoel Telecommunicatie
-5-
Candidate: J. Bastiaans
Graduation date: 27 August 1998
Project: Tag-CTW: text compression using context-tree weighting and word information
Supervision: dr.ir. F.M.J. Willems
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
In 1995 Shtarkov, Tjalkens and Willems presented a universal data compression algorithm. This algorithm was adapted, partly by the authors and partly by Volf, for use in text compression. To improve the compression further, more must be known about the source. For textual information the extra information available is the grammar of the language in which the text is written; the grammar determines word order and inflection. This extra information, in the form of word classes (tags), can benefit the compression of the text. In 1996 Teahan presented a method that uses part-of-speech information to achieve better compression: words are predicted from the current tag and the preceding word, and tags are predicted from the preceding word with its tag and the tag before that. To use word classes, they must first be assigned to the words. In 1992 Brill presented a tagger which, based on a lexicon and about 300 simple rules, assigns a tag to each word in a text with an accuracy of more than 97%. An advantage of this tagger is that it is easy to train for languages other than English. The goal of this research is to adapt the CTW algorithm so that, by using word classes, it achieves better compression than the "plain" CTW algorithm on the same text without word classes. It turned out that in the adapted CTW algorithm the text symbols are best predicted from the current tag, the five preceding symbols and the preceding tag; a tag is best predicted from the preceding tag, the 7, 8 or 10 preceding symbols and the tag before the preceding tag. Two kinds of simulations were performed. The first variant compresses the entire text, including formatting and the like.
Compared with a "plain" CTW algorithm with context depth 7 (CTW-7) this yields an average gain of 0.2%; compared with a "plain" CTW algorithm with depth 5 (CTW-5) the average gain is 4.2%. The second variant converts all capitals to lower case and ignores all symbols other than letters, spaces, full stops and commas, which gives the text a very regular structure. Compared with CTW-7 an average gain of 1.0% is then achieved, and compared with CTW-5 as much as 5.7%. These gains come at a price, however: memory usage is up to ten times that of the "plain" CTW algorithm with context depth 5, and up to 2.5 times that of the CTW algorithm with depth 7.
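Underlying CTW is the Krichevsky-Trofimov (KT) sequential estimator, whose estimates CTW weights over all context-tree depths. A minimal sketch of the bare KT estimator for a binary string (illustration only; the thesis applies full CTW with tag contexts, not this bare estimator):

```python
import math

# Krichevsky-Trofimov (KT) sequential probability estimator for a binary
# string: the next-bit probability is (count + 1/2) / (n + 1).  The ideal
# code length of the string is then -log2 of the assigned probability.
def kt_probability(bits):
    zeros = ones = 0
    p = 1.0
    for b in bits:
        if b == 0:
            p *= (zeros + 0.5) / (zeros + ones + 1.0)
            zeros += 1
        else:
            p *= (ones + 0.5) / (zeros + ones + 1.0)
            ones += 1
    return p

bits = [0, 1, 0, 0, 0, 1, 0, 0]
p = kt_probability(bits)
print(f"P = {p:.6f}, ideal code length = {-math.log2(p):.2f} bits")
```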
-6-
Candidate: R.J.C. Besselink
Graduation date: 27 August 1998
Project: Implementation and testing of an improved echo canceller and an ADPCM speech coder
Supervision: K. Elmalki, Ph.D., ing. A. Tognoni
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
The Master's thesis work was carried out at an Ericsson site in Rome and consisted of three parts. The first was setting up a test environment in a laboratory for the evaluation of speech compression and echo cancellation algorithms; this environment had to resemble the situation found in a modern PSTN. The thesis describes this test environment as well as the test requirements. The second part of the activities involved testing an improved time-domain echo canceller that was implemented by Ericsson. Several novelties were used to improve the performance and reduce the computational load on the DSP, such as voice activity detection and 'echo path optimisation'. The low-level code had to be downloaded onto a DSP platform, debugged and tested. A digital delay had to be implemented to account for the transmission times of long-distance telephony. Several LMS-based algorithms were evaluated. After many hardware and software problems had been solved in the lab, tests were conducted which revealed that an excellent result was achievable with the echo canceller. Both novel features proved to work, after modifications were made. With speech the algorithm converged perfectly; tones, however, often caused the DSP to block completely. The thesis explains how this error may be solved, and additional recommendations are given to further improve the implemented algorithm. Up to mean one-way transmission times of about 80 ms no NLP is required. Of the various LMS-based algorithms, the NLMS adaptive filter worked best. To be able to test ADPCM, an efficient low-level implementation had to be developed for the TMS320C542 DSP using fixed-point arithmetic. Floating-point and fixed-point programs were developed in the C language, from which an efficient assembler program could be derived. First, this microcode was checked using a device simulator. After this proved to work, minor tests were carried out in the laboratory with tones and speech as input signals.
No problems were encountered; from this point, Ericsson will take care of the desired tests. Additionally, simulations were done using the encoder and decoder programs written in the C language. Five different tests were carried out: with fixed-point and floating-point variables, a comparison between the ITU recommendations G.721 and G.726 for ADPCM, and allowing +0 to be transmitted by the encoder (which differs from the recommendation). From these simulations, using 6 different speech files and up to 10 synchronous tandem codings, it was found that the codec using floating-point variables yields a somewhat, though not spectacularly, higher subjective speech quality than a codec using fixed-point variables. The older ITU standard G.721 provided higher speech quality than the newer G.726 standard. The largest effect was obtained by allowing the +0 to be transmitted by the encoder. The speech material was still fairly intelligible even after 10 synchronous tandem codings. Benchmark values obtained from the microcode show that about 8 channels can be encoded or decoded simultaneously on this DSP. The ADPCM speech codec consumes 1216 words of program memory; for each channel, 48 words of data memory should be reserved.
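The NLMS adaptive filter that performed best above can be sketched as follows. The echo path and the white-noise far-end signal are invented for illustration; a real canceller adds VAD, an NLP and fixed-point DSP code.

```python
import random

# Sketch of a normalised LMS (NLMS) echo canceller: an adaptive FIR filter
# models the echo path so that its output can be subtracted from the echo.
def nlms_cancel(far_end, echo, taps=8, mu=0.5, eps=1e-6):
    """Adapt filter w to the echo path; return the residual error signal."""
    w = [0.0] * taps
    x = [0.0] * taps                               # far-end delay line
    errors = []
    for n in range(len(far_end)):
        x = [far_end[n]] + x[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x))   # echo estimate
        e = echo[n] - y                            # residual echo
        norm = sum(xi * xi for xi in x) + eps      # input-power normalisation
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        errors.append(e)
    return errors

random.seed(0)
far = [random.uniform(-1, 1) for _ in range(2000)]
path = [0.6, 0.0, -0.3]                            # assumed echo path
echo = [sum(p * far[n - k] for k, p in enumerate(path) if n >= k)
        for n in range(len(far))]
res = nlms_cancel(far, echo)
print(f"residual power, last 100 samples: {sum(e*e for e in res[-100:]) / 100:.2e}")
```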
-7-
Candidate: M.A. Bongaerts
Graduation date: 27 August 1998
Project: The influence of the parameters of a lightning impulse on the breakdown behaviour of insulating materials
Supervision: K. Elmalki, Ph.D., ing. A. Tognoni
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
Since the introduction of digital measuring equipment there has been much discussion about the determination and calculation of lightning impulse (LI) parameters, as described in IEC 60-1. The problem is that the rules for evaluation of the LI waveforms are not unambiguous when fast state-of-the-art digital oscilloscopes and digital evaluation are used. Further, it is appropriate to investigate whether or not the classical parameter set is the most indicative of the behaviour of insulating materials, because this is very questionable. For example: does an oscillation influence the breakdown behaviour, and is an oscillation on the lightning impulse permissible? A European project has been started to investigate the influence of the different parameters characterizing a lightning impulse. By generating lightning impulses with different parameters, the breakdown behaviour of oil is investigated. This report deals with a part of this project. A test circuit to generate the different waveforms was designed, simulated and built. A literature study was carried out on the different methods of evaluating the various parameters. A measuring system to measure the oscillations and the overshoot was developed, built and tested. All the desired waveforms can be generated, and two different circuits are available to generate the different oscillations on the standard lightning impulse. A specific measuring system, a differentiating-integrating system, was developed for this project; it was designed, built and tested, and the calculated properties of the system proved correct in practice, so the differentiating-integrating system can be used.
-8-
Candidate: A.A. Goedhart
Graduation date: 27 August 1998
Project: Microstrip mixer and antenna design for a 14 GHz FMCW radar system
Supervision: prof.dr.ir. G. Brussaard
Graduation professor: prof.dr.ir. G. Brussaard
Summary:
This Master's thesis gives the results of a six-month research project carried out in a collaborative arrangement between a British company, Ogden Safety Systems Ltd., and the University of Bradford (UK). Ogden Safety Systems Ltd. manufactures FMCW radar systems that are used on the rear of large vehicles on building sites to prevent collisions while reversing. A new market for the system is use inside tanks and silos to measure the level of liquids, powders, etc. in industrial environments. The most important aim of the project was to make the radar smaller. The approach chosen to achieve this was to make part of the microwave assembly in microstrip. It was also hoped to reduce the level of background signal arising in the simple existing mixer design, which reduces the sensitivity to close-in reflections. It was decided to make a single balanced mixer in microstrip to replace the single-diode mixer currently used. Low-cost surface-mounted diodes (Alpha SMS7621-006) were used as detectors. Two types of hybrid couplers are analysed in this report; implementations of both types were designed, etched and measured. Another goal was to improve the radar front end in order to make the system suitable for use inside tanks and silos. An antenna with a low sidelobe level is desired for this application. An 8-patch uniform array was designed and tested. A parallel feeding network of 2-way power splitters is used to feed the array, and matching transformers are included in the feed network to match the input impedance of the patches to the impedance of the feed lines. A microstrip-to-waveguide transition was made to connect the waveguide parts of the radar system to the microstrip parts. This transition uses a stepped quarter-wavelength transformer designed according to the Chebyshev distribution. A good connection between the ground plane of the microstrip substrate and the waveguide is very important.
It was found that the performance of the transition depended on how far the dielectric was inserted under the last step of the transformer. This effect is not mentioned in the papers on this subject. The components designed and tested in this project have not yet been integrated in a complete circuit. However, a proposal for a complete front-end containing the developed components is given and can be used as a start for further work.
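The stepped Chebyshev transformer generalises the single-section quarter-wave match, which also underlies the matching transformers in the feed network. A minimal sketch of that basic relation, with illustrative impedance values not taken from the thesis:

```python
import math

# Single quarter-wave match: a lossless line of impedance sqrt(Z0*ZL) and
# length lambda/4 transforms load ZL into Z0 at the design frequency.
def quarter_wave_z(z0, zl):
    return math.sqrt(z0 * zl)

def input_impedance(z_line, z_load):
    """Impedance seen through a lossless quarter-wave line: Zin = Z^2 / ZL."""
    return z_line ** 2 / z_load

z0, zl = 50.0, 120.0                 # assumed feed-line and patch impedances
zq = quarter_wave_z(z0, zl)          # matching-section impedance
print(f"Zq = {zq:.1f} ohm, Zin = {input_impedance(zq, zl):.1f} ohm")
```

A multi-section (stepped) transformer trades length for bandwidth; the Chebyshev distribution of the step impedances gives an equal-ripple match over the band.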
-9-
Candidate: P.H. Trommelen
Graduation date: 12 February 1998
Project: Algorithms for determining water vapour and liquid water contents of the atmosphere using radiometer and meteorological data
Research theme: Radiometry
Supervision: drs. S. Jongen, dr.ir. M. Herben, prof.dr.ir. G. Brussaard
Summary:
The objective of the research on microwave radiometry at the Radiocommunications Group of the Eindhoven University of Technology (EUT) is to find a reliable and cost-effective way to determine the water vapour and liquid water contents of the atmosphere. Applications of this kind of measurement lie in the design of satellite communication systems and in meteorological and environmental research of the atmosphere, and of clouds in particular. The remote sensing technique described is based on measuring the brightness temperature of the atmosphere at two different frequencies with a radiometer. The frequencies used are 21.3 GHz, at which there is an absorption peak due to water vapour, and 31.7 GHz, a window frequency that is most sensitive to liquid water. From these measurements the integrated liquid water (L) and integrated water vapour (V) of the atmosphere can be determined. In the simplest retrieval algorithms, V and L are estimated by a linear combination of the atmospheric brightness temperatures. In the 'Matched Atmosphere' algorithm, which was developed at the EUT, the temperature, air pressure and relative humidity profiles of a standard atmosphere are used. The cloud base height and a relative humidity reference level are parameters that are varied until the brightness temperatures calculated from the atmosphere model match the measured temperatures as closely as possible. If available, cloud base height data from lidar or radar are used as input and the cloud top height is used as tuning parameter. Having found the best match, V and L can be calculated from this atmosphere model. More accurate results are expected when real-time meteorological data are used. Within the Clouds And Radiation (CLARA) research project a large amount of data on the atmosphere, and on clouds in particular, was gathered by KNMI, TU-Delft, RIVM, ECN and TNO-FEL using radar, lidar, infrared radiometer and radiosondes. EUT participated in this project with radiometer measurements.
Data analysis showed that all algorithms give an increase in L whenever there are clouds. The V and L retrieval curves obtained with the different algorithms all have about the same shape, but differ in level and range. All algorithms give small negative values for the retrieved amount of liquid water, which of course is physically impossible. Reasons for the obtained differences and for the negative amounts of L with the linear algorithms could be found in the fact that the parameters used in these algorithms are badly tuned for the specific test site, and that the assumption of a linear relation between the attenuations and V and L might not hold in clear-sky conditions. For the Matched Atmosphere algorithm, negative L values could be caused by differences in the estimated measurements, inaccuracy in the reference values, and inaccuracy introduced by the linear interpolation that is used in the algorithm. Which of the previously mentioned reasons is responsible for the deviations is hard to say; probably it is a combination of all of them. Using radiosonde data, reference values for V and L were defined and used to verify the results obtained with the different algorithms. Comparing the results of the Matched Atmosphere algorithm with these reference values shows deviations in V of up to 7.5 mm, with an rms difference of 3.21 mm for the first CLARA campaign and 1.22 mm for the second. Furthermore, it was noted that the Matched Atmosphere algorithm also gives negative amounts of liquid water during periods of clear sky.
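The linear retrieval mentioned above can be sketched in a few lines: V and L are written as affine combinations of the two brightness temperatures. The coefficients below are invented for illustration only; in practice they must be tuned to the test site, which is exactly why badly tuned coefficients can yield small negative L values in clear-sky conditions.

```python
# Illustrative linear retrieval of integrated water vapour V (mm) and
# integrated liquid water L (mm) from brightness temperatures (K above a
# clear-sky baseline) at 21.3 GHz and 31.7 GHz. All coefficients are
# hypothetical placeholders, not the tuned values from the report.

def linear_retrieval(tb21, tb31,
                     a=(-2.0, 0.40, -0.15),      # hypothetical V coefficients
                     b=(-0.10, -0.005, 0.012)):  # hypothetical L coefficients
    """Return (V, L) as affine combinations of the two brightness temps."""
    v = a[0] + a[1] * tb21 + a[2] * tb31
    l = b[0] + b[1] * tb21 + b[2] * tb31
    return v, l
```

Note that nothing in the linear form prevents L from going slightly negative, mirroring the behaviour observed in the report.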
Naam kandidaat: R.A. Werkhoven
Afstudeerdatum: 27 augustus 1998
Afstudeerprojekt: On the performance of OFDM in combination with preset equalisation
Begeleiding: dr.ir. P.F.M. Smulders, prof.dr.ir. G. Brussaard
Summary: The ability to communicate with people on the move has evolved remarkably during the past ten years. The mobile radio communications industry has grown by orders of magnitude, fuelled by digital and RF circuit fabrication improvements, new large-scale circuit integration, and other miniaturisation technologies that make portable radio equipment smaller, cheaper, and more reliable. These trends will continue at an even greater speed during the next decade. The expectation is that in the near future the demand for broadband wireless multimedia services (data, voice, video) will increase the need for a Broadband Integrated Services Digital Network (B-ISDN). The first step towards wireless multimedia services is to develop a wireless local area network (wireless LAN) that will offer greater mobility than cable- or fibre-based systems. Within the European program ACTS (Advanced Communication Technologies and Services), the MEDIAN consortium has started the evaluation of wireless LANs which are able to support multimedia applications. The main goal of this project is to implement a pilot system incorporating a wireless LAN configuration offering an aggregate user capacity of 150 Mbit/s. Reliable communication supporting multimedia applications should be guaranteed whenever the portable station is within the coverage area. A disadvantage of radio channels is that movements and the multipath effect limit the achievable bit rate of a digital communication system. Simulations are therefore necessary to gain insight into the trade-off between complexity and feasible data rates. The objective of this report is to examine the performance of emerging radio transmission techniques that enable data rates up to 200 Mbit/s over 60 GHz indoor radio channels. The underlying study is based on a set of 22 complex channel impulse responses previously measured in the Reception Room of the Eindhoven University of Technology.
From these responses it appears that, without sophisticated countermeasures, the quoted target bit rates are not feasible due to the frequency selectivity of the channel. To transform the frequency-selective channel into a frequency non-selective channel (flat fading channel), Orthogonal Frequency Division Multiplexing (OFDM), a two-tap Mean-Squared Error (MSE) equaliser, and preset equalisation have been studied. Simulation results show that OFDM in combination with preset equalisation and channel coding guarantees a reliable Bit Error Rate of 10^-8 at 200 Mbit/s in 60%-70% of the positions in the Reception Room.
Keywords: indoor radio channel / OFDM / MSE minimisation / ZF equaliser.
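The flat-fading property that OFDM exploits can be sketched numerically: with a cyclic prefix at least as long as the channel memory, linear convolution becomes circular, and circular convolution is diagonalised by the DFT, so each subcarrier sees a single complex gain. The two-tap channel below is an invented example, not one of the 22 measured impulse responses.

```python
# Sketch: cyclic-prefix OFDM turns a frequency-selective channel into a
# set of one-tap (flat) subchannels. Toy sizes; channel taps invented.

import cmath

def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(s * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def ofdm_roundtrip(symbols, channel):
    """Send symbols over a length-l FIR channel with a cyclic prefix."""
    n, l = len(symbols), len(channel)          # requires l >= 2
    time = dft(symbols, inverse=True)          # IDFT at the transmitter
    tx = time[-(l - 1):] + time                # prepend cyclic prefix
    rx = [sum(channel[j] * tx[i - j] for j in range(l))
          for i in range(l - 1, l - 1 + n)]    # channel conv., CP removed
    return dft(rx)                             # DFT at the receiver

# Each received subcarrier equals the sent symbol multiplied by the
# channel's frequency response at that bin.
```

This per-subcarrier scaling is what makes a simple one-tap correction (or preset equalisation) sufficient after the DFT.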
Naam kandidaat: J.R.L.C. Ariens
Afstudeerdatum: 27 augustus 1998
Afstudeerproject: Study on the transportation of management signals in a WDM network.
Begeleiding: dr.ir. De Waardt, ir. J.C. v.d. Plaats
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary:
This summary discusses the principles of different options for the transportation of management signals in an optical WDM network. By using the principles of Frequency Division Multiplexing (FDM) and low-power transmission of the management signal, the data and management signals can be separated at the receiver at the cost of a small receiver sensitivity penalty. During this project we worked with a data capacity of approximately 2.5 Gbit/s (8 × 622 Mbit/s) and a management signal capacity of 10 kbit/s.
• Network layout
Before we mention the basic principles of our theory, we take a closer look at the lightwave system of the TOBASCO project. From the report we see that the model consists of a laser, fiber, amplifier, optical filter and two receivers. To reduce the complexity, the theory is based upon a lightwave system with one data and one management signal operating at the same wavelength.
• Principles of combining and separating signals
When we multiplex both the management and the data signal as shown in the report, we have to look at both spectra. We can conclude that the spectrum of the data signal overlaps the spectrum of the management signal. Nevertheless, the data power is divided over a much larger frequency range, so the disturbance of the data on the management signal can be seen as a white noise source with a small bandwidth. By using filters and low-power transmission of the management signal, both signals can therefore be recovered with a small loss in receiver sensitivity.
• Performance of data and management signals
The performance of the data and management signals is given by the BER (Bit Error Rate). Taking all noise sources and the mutual distortion between the management and data signals into account, the report shows that we are able to combine both signals at the penalty of a small loss in receiver sensitivity.
• Transceiver design
To be able to verify our theory, an RS232 transceiver was designed; the report shows the building blocks of the RS232 receiver. A normal transimpedance amplifier was slightly modified for our purposes. Filters were added to be able to filter out the DC component of the optical signal. The linear channel consisted of filters and rectifiers to restore the 10 kbit/s management signal (we used an ASK modulation scheme for the management signal to overcome several difficulties; see the report). The data recovery section faced some difficulties, because the RS232 management signal is a burst-mode signal. With the use of a feedback system we were able to decide correctly whether an optical 0 or 1 was received.
• Testing & measuring
The difference between theory and measurements was explained, but that goes beyond the objective of this summary.
Taking this into account, we observed the BER curves we were expecting (see the report).
• Conclusions
We discussed a model in which management and data signals are multiplexed in one fiber. As a result of the coupling of these signals the noise sources increase, which leads to a lower but acceptable receiver sensitivity.
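The separation argument above can be sketched numerically: because the data power is spread over a band much wider than the management signal's, averaging over one management bit suppresses the data like wideband noise. This is a scaled-down toy model; the rates, amplitudes and the ±1 "data" stream are stand-ins, not the project's 2.5 Gbit/s and 10 kbit/s signals.

```python
# Toy model of FDM separation: a low-rate, low-power management level
# rides on a fast pseudo-random +/-1 data stream; a moving average over
# one management bit (a crude low-pass filter) recovers the bit.

import random

def recover_management(mgmt_bits, samples_per_bit=1000, mgmt_amp=0.2, seed=1):
    rng = random.Random(seed)      # fixed seed: deterministic sketch
    recovered = []
    for bit in mgmt_bits:
        level = mgmt_amp if bit else -mgmt_amp
        # combined signal: management level plus wideband +/-1 "data"
        avg = sum(level + rng.choice((-1.0, 1.0))
                  for _ in range(samples_per_bit)) / samples_per_bit
        recovered.append(1 if avg > 0 else 0)
    return recovered
```

The residual data power after averaging is what shows up as the small receiver sensitivity penalty discussed above.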
Naam kandidaat: M.P.H. v.d. Bergh
Afstudeerdatum: 15 oktober 1998
Afstudeerproject: WDM-monitoring.
Begeleiding: dr.ir. H. De Waardt, dr. H.J.S. Dorren, dr. J.J.G.M. v.d. Tol
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary: I carried out my graduation project at KPN Research in Leidschendam, in the Signal Transport Systems department. Within this group I worked on the BOLERO project. BOLERO stands for Beheer Optisch Laag ExpeRimenteel Onderzoek (experimental research into optical-layer management). BOLERO investigates the management aspects of an optical Wavelength Division Multiplexing (WDM) ring network. The ultimate goal of the BOLERO project is to realise a small optical ring network in which the routing from transmitter to receiver is entirely optical, via wavelength multiplexers, demultiplexers and switches. Monitoring functions will be included at certain places in the network, with which as many as possible of the faults that can occur in an optical network will be detectable. My contribution to this project consisted of designing a low-cost monitoring system for the BOLERO network. This system had to test the quality parameters of a number of WDM channels in an optical signal. During my graduation period I developed a new method for this purpose. The method uses a simple optical component, a wavelength-dependent attenuator, together with smart signal-processing techniques. Based on this, I realised an implementation of the method in a monitoring system for the BOLERO network. This implementation was subsequently verified experimentally.
Naam kandidaat: V. Grundlehner
Afstudeerdatum: 23 april 1998
Afstudeerproject: Using Dense Wavelength Division Multiplexing in Optical Transport Systems.
Begeleiding: ir. De Waardt, dr. Perny
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary: This report describes the results of a research study on Dense Wavelength Division Multiplexing (DWDM) in point-to-point optical transport systems. The goal of this study was to acquire general experience with DWDM optical transmission systems and to obtain an impression of the specific problems and solutions that are typical of DWDM systems. A DWDM system had to be built, and the performance of the different types of external modulators in this system had to be determined. Furthermore, the influence of linear optical crosstalk on the system performance had to be characterized. To achieve these goals, a number of topics were addressed during this study. First of all, it was concluded that measuring a BER curve is an accurate way of examining the system performance. For this reason, this report discusses a model with which a theoretical BER curve can be derived. This model is in good agreement with a BER curve that was measured using a back-to-back setup. With this setup, a value of -30.5 dBm was found for the receiver sensitivity at a BER of 10^-9. Furthermore, two effects that limit the performance of DWDM systems were identified. The first, spectral broadening, is the result of the applied modulation technique and of the dispersive nature of standard optical fiber. A discussion of the consequences of spectral broadening caused by fiber dispersion showed that a signal modulated at 2.5 Gbit/s, travelling through optical fiber with a dispersion coefficient of 17 ps/(nm km), should be regenerated after 147 kilometers. When direct modulation is used, this regeneration distance reduces to 115 kilometers. The second performance-limiting effect that was studied is linear optical crosstalk in the demultiplexers. In this report, a model is given with which the consequences of this effect on the system performance can be calculated.
Applying this model to a DWDM system having two channels with a crosstalk level of 9.2 dB and modulation indexes equal to one shows an increase in BER by a factor of 90 from an initial BER value of 10^-13. Furthermore, it is shown that in a system with the system parameters mentioned before, a worst-case crosstalk level of 9.2 dB causes a power penalty of 1.1 dB at a BER level of 10^-9. This report also identifies many of the components that are used in DWDM systems and studies their requirements. Special attention is given to the laser source, the external modulators and the optical channel demultiplexers. DFB lasers and FP lasers with external cavities were used in the DWDM setup. These lasers had respective SMSRs of approximately 41 and 56 dB. Both sources were tunable and had respective linewidths of approximately 30 MHz and 100 kHz. Furthermore, measurement techniques with which the laser linewidth can be determined with high accuracy are described. The setup using the FPI to measure the linewidth of an unmodulated laser produced the best results, and a linewidth of 9.7 MHz was measured with this device. In the DWDM setup, EA and MZI modulators were used. These external modulators provided respective extinction ratios of approximately 5 and 8 dB when applying a modulation voltage of 552 mV top-top. In this report, it is shown that the spectral broadening caused by the applied modulation technique can be characterized by the α factor. For the MZI modulator, this α factor can be as low as zero. Furthermore, BER measurements were made with setups in which these modulators were present. These measurements showed that the EA modulators caused a non-linear increase in the BER. The MZI modulator performed better, and a receiver sensitivity of -28.4 dBm at a BER level of 10^-9 was measured. Special attention was also given to the demultiplexers. The available demultiplexers consisted of cascaded filters having a channel distance of 1.6 nm.
In this report, a definition of the crosstalk levels in these demultiplexers is given and measurements of the crosstalk levels are discussed. Characterization of the demultiplexers showed a worst-case compound crosstalk level of 9.2 dB. Measurements showed that such a crosstalk level results in a power penalty of 0.33 dB at a BER level of 10^-9. Furthermore, due to crosstalk, a BER increase by a factor of 71 was measured at a BER of 1.4 · 10^-10.
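The regeneration distances quoted above follow from a standard back-of-envelope dispersion limit. With external modulation the spectral width is roughly Δλ ≈ B·λ²/c, and pulse broadening after a distance L is D·Δλ·L. Limiting the broadening to a fraction ε of the bit period gives L = ε·c/(B²·λ²·D). The criterion ε = 1/8 is an assumption on our part; it happens to reproduce the ~147 km figure for 2.5 Gbit/s and D = 17 ps/(nm km) at 1550 nm, but it need not be the exact criterion used in the report.

```python
# Back-of-envelope dispersion-limited transmission distance for an
# externally modulated signal. eps = 1/8 is an assumed broadening
# criterion, not taken from the report.

C = 299_792_458.0  # speed of light, m/s

def dispersion_limit_km(bitrate_hz, wavelength_m, d_ps_nm_km, eps=1.0 / 8.0):
    d_si = d_ps_nm_km * 1e-12 / (1e-9 * 1e3)   # ps/(nm km) -> s/m^2
    l_m = eps * C / (bitrate_hz ** 2 * wavelength_m ** 2 * d_si)
    return l_m / 1e3
```

Direct modulation broadens the spectrum further via chirp (a nonzero α factor), which is consistent with the shorter 115 km distance quoted for that case.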
Naam kandidaat: J. van der Heijden
Afstudeerdatum: 11 juni 1998
Afstudeerproject: Analog correlator for high speed DS-CDMA modem.
Begeleiding: dr. v.d. Boom, ir. Kennis
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary For upstream communication in CATV networks (communication from subscribers to the head-end), CDMA seems to be the best multiple-access technique in comparison with TDMA and FDMA. The greatest advantage of CDMA is its robustness against narrowband interference and impulse noise. This makes the CDMA technique suitable for CATV networks, where especially the lower frequency band is corrupted by narrowband interference. A disadvantage of CDMA is its complex hardware. An all-digital transmitter and receiver have already been realized for the upstream communication. This digital transmitter-receiver combination operates properly. An advantage of an all-digital modem is its easy implementation: almost all hardware fits into programmable devices. However, the received signal has to be converted from the analog to the digital domain, which requires a very high speed A/D converter. At this moment one of the fastest A/D converters available is used. If we want to increase the bit rate of the data, we have to look for another approach. It was proposed to use an analog correlator, the key part of the receiver, and to perform the analog-to-digital conversion in a later stage. After the correlator the received signal is despread, so the A/D converter can operate at a lower clock frequency. An analog correlator circuit was developed in discrete hardware and measurements were carried out. The measurements have shown that the analog correlator can increase the data rate by at least a factor of ten compared to the digital modem. Measurements of correlation functions with the analog correlator approximate the theoretical correlation functions very well. The correlator has also been tested in a multi-user system, where an arbitrary waveform generator generated a 25 Mchips/s CDMA signal, which corresponds to a bit rate of 400 kbit/s for each user. The analog correlator performs well if the number of users does not exceed 80. This was the maximum number of users that could be achieved during the period of research, but a slight improvement in the synchronization can lead to better results.
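The correlator's task, despreading, can be sketched as follows: the received chip stream is multiplied by the user's PN sequence and integrated over one bit, and the sign of the integral gives the data bit. The PN sequence below is a random placeholder, not one of the codes used in the modem, and the model is single-user and noise-free.

```python
# Minimal model of despreading in a DS-CDMA receiver. The correlator in
# the thesis does this multiply-and-integrate step in the analog domain,
# so the A/D converter only needs to run at the (much lower) bit rate.

import random

def spread(bits, pn):
    """Map each +/-1 data bit onto len(pn) chips."""
    return [b * c for b in bits for c in pn]

def despread(chips, pn):
    """Correlate each bit interval with the PN code; sign decides the bit."""
    n = len(pn)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * pn[j] for j in range(n))
        out.append(1 if corr > 0 else -1)
    return out
```

In a multi-user setting the other users' spread signals add a residual cross-correlation term to `corr`, which is why performance degrades as the number of users grows.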
Naam kandidaat: J.J.L. Hoppenbrouwers
Afstudeerdatum: 15 oktober 1998
Afstudeerproject: An experimental optical neuron.
Begeleiding: dr.ir. H. de Waardt, ir. E.C. Mos, dr. J.J.H.B. Schleipen (Philips)
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary This thesis describes the experimental verification of a concept named the injection seeding neuron. The work conducted is part of the Laser Neural Network project. The goal of this project is the realisation of a neural network in the optical domain by use of diode lasers. The injection seeding neuron is a concept of a fully optical neuron. Both the inputs and the output of this neuron are defined by optical powers. In realising such a neuron, one of the requirements is a nonlinear function. In the injection seeding neuron, this function is realised by the injection of light into a semiconductor laser diode. Two longitudinal modes of the laser diode are used, with one above and one below threshold. The nonlinear function is obtained by increasing the power injected into the below-threshold mode. As a result, the below-threshold mode can start laser operation. A crucial technique in the injection seeding neuron is injection locking, or injection seeding. This technique is theoretically investigated. The experimental setup consists of two parts. The main component of both parts is a laser diode with an external cavity. A tunable laser provides the input signal to the injection seeding neuron. The output wavelength of this tunable laser is very stable, in order to achieve injection locking. The actual injection seeding neuron is formed by the so-called neuron laser. In the external cavity of this neuron laser, the applied feedback can be controlled for several longitudinal modes by use of a grating and a liquid crystal display. The experiments successfully verify the concept of the injection seeding neuron. If the laser diode has a low pumping current, the measured nonlinear functions are in agreement with previously performed simulations. If the current is increased, different nonlinear functions are found. A variation of the injected frequency also has an effect on the shape of the nonlinearity. Using the same setup, a frequency converter is demonstrated.
It was difficult to perform multiple, reproducible measurements due to instabilities in the setup. Therefore, it is recommended that the same experiments be carried out with an improved setup.
Naam kandidaat: J.M.B.M. Kennis
Afstudeerdatum: 12 februari 1998
Afstudeerproject: A high bitrate direct-detection DPSK optical transmission system.
Onderzoeksthema: High speed trunk transmission
Begeleiding: dr.ir. H. de Waardt, ir. J.G.L. Jennen
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary: For high speed data transmission over an optical fibre, both the 1500 nm and the 1300 nm regions can be used. In the 1300 nm region the polarisation-insensitive Semiconductor Optical Amplifier (SOA) is available to optically amplify the signal. These amplifiers have a low saturation input power. Due to saturation, pattern effects occur when using intensity modulation. The pattern effect causes a closure of the eye pattern, resulting in a low BER. A solution to this problem is decreasing the signal power so that saturation will not occur. This will decrease the achievable length of the link, because the signal-to-noise ratio will become too low. A second solution is making use of Differential Phase Shift Keying (DPSK) or Frequency Shift Keying (FSK) modulation. Due to the constant envelope of the modulated signals, the saturation of the SOAs does not cause the pattern effect. DPSK seems to be the best choice because it can be easily modulated using a LiNbO3 phase modulator (commercially available at speeds up to 40 Gbit/s) and has a high receiver sensitivity. A new problem introduced when using DPSK is the phase noise, which influences the data stored in the phase. The advantage of Differential PSK is that only the phase change between two bits is of interest. The effect of phase noise is expressed in the factor Δν · T, where Δν is the Full Width at Half Maximum (FWHM), also called linewidth, of the optical spectrum of an unmodulated signal at the input of the demodulator, and T is the bit time. To be able to estimate the desired value of this factor, a simulation program was written to calculate the BER curve for different values of Δν · T. Since the error probability has a Poisson distribution, the saddle point approximation method is used to calculate the BER. The demands on Δν · T, found both in the literature and by the simulations, are Δν · T = 0.01 for a balanced receiver and Δν · T = 0.005 for an unbalanced receiver. Interestingly, when the bit rate becomes larger, the demands on the linewidth relax. The demands on Δν · T can easily be met for links with up to 20 amplifiers (≈ 1000 km).
Another important noise factor is the Amplified Spontaneous Emission (ASE) noise, which is broadband noise caused by spontaneous emission in the SOAs. To describe its effect, a simulation program was written that calculates the BER curve for different numbers of amplifiers and for different optical input powers of the link. The simulation shows that the best way to deal with the ASE noise is to increase the signal power, so that the signal-to-noise ratio stays high enough. A link of more than 20 SOAs can be reached. Both the phase-noise and ASE-noise simulations predict that a link of 20 amplifiers can be reached. To study what happens when both noise sources are combined, a simulation program was written which calculates the BER of the entire system. The result of this simulation confirms that it is, at least theoretically, possible to make links with lengths up to 1000 km using 20 segments of 50 km fibre and SOAs with an average gain of 20 dB.
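A helper like the following captures the quoted demands on Δν · T, treating the quoted values as upper limits. It also makes visible why higher bit rates relax the linewidth demand: T = 1/B shrinks while Δν stays fixed.

```python
# Check of the DPSK phase-noise criterion: the product of laser
# linewidth (FWHM) and bit time must stay below a receiver-dependent
# bound. Bound values are taken from the summary above, interpreted as
# upper limits.

def linewidth_ok(linewidth_hz, bitrate_bps, balanced=True):
    """True if Delta_nu * T meets the DPSK phase-noise demand."""
    bound = 0.01 if balanced else 0.005
    return linewidth_hz / bitrate_bps <= bound
```

For example, a 30 MHz linewidth meets the balanced-receiver demand at 10 Gbit/s (Δν·T = 0.003) but fails the unbalanced-receiver demand at 2.5 Gbit/s (Δν·T = 0.012).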
Naam kandidaat: F.M. Ploumen
Afstudeerdatum: 12 februari 1998
Afstudeerproject: Medium Access Control Protocols for ATM based Passive Optical Networks.
Onderzoeksthema: Local Networks (eco-30)
Begeleiding: ir. H.P.A. v.d. Boom, ir. R. Hoebeke (Alcatel) en ir. K. Venken (Alcatel)
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary: The main function of a shared-medium access network is to multiplex the upstream traffic from several customers and offer it to the core network. In the context of networked applications over ATM, this involves the multiplexing of ATM connections. The end-to-end performance of a broadband shared-medium access network is determined by the combination of the adopted multiplexing policy and the behaviour of the network elements implementing the quality-of-service policy. This policy should be optimized for end-to-end delays which are as constant as possible, and for maximum throughput. Clearly, in the case of broadband shared-medium access networks, the (shared) Medium Access Control (MAC) protocol will be one of the determining factors for the end-to-end performance. In this thesis a general simulation model for the performance evaluation of MAC protocols was developed with the software simulation tool OPNET. After developing this model, several MAC protocols were studied and new proposals were made. An essential choice for the performance of the shared-medium access control mechanism is whether the MAC protocol is based on a static or a dynamic mechanism. In contrast with static protocols, dynamic protocols can react to traffic fluctuations. They are also more robust against changing loads or burstiness. Analyzing the simulation results for the passive optical access network, it is concluded that all dynamic MAC protocols perform quite similarly. Although more complex grant generation mechanisms might be able to outperform the others for certain scenarios, it is at this moment not possible to identify a MAC protocol which performs best over a wide range of traffic scenarios. Especially mechanisms used to reduce cell delay variation, like spacing, should be dimensioned very conservatively, since they can disturb the throughput under heavy loads.
Therefore it must be concluded that the performance of a MAC protocol should be analyzed with several well-chosen traffic scenarios.
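The static-versus-dynamic distinction above can be made concrete with a toy slot-level model: a static MAC grants upstream slots in a fixed round-robin, while an idealised dynamic MAC grants each slot to the customer with the longest queue (a stand-in for a request/grant mechanism). The arrival pattern, buffer model and delay proxy are all invented for illustration and are far simpler than the OPNET model used in the thesis.

```python
# Toy slot-based comparison of static vs dynamic upstream grant
# allocation on a shared medium. One cell is served per slot; the
# time-integrated queue length serves as a crude delay proxy.

def run_mac(arrivals, dynamic, slots):
    queues = [0] * len(arrivals[0])
    backlog = 0
    for t in range(slots):
        for i, a in enumerate(arrivals[t % len(arrivals)]):
            queues[i] += a                       # cells arriving this slot
        if dynamic:
            i = max(range(len(queues)), key=lambda k: queues[k])
        else:
            i = t % len(queues)                  # fixed round-robin grant
        queues[i] = max(0, queues[i] - 1)        # serve one cell
        backlog += sum(queues)
    return backlog
```

With an asymmetric load (one customer sending three times as much as the other), the static grant wastes slots on the idle customer and its backlog grows, while the queue-driven grant keeps up, which is the robustness-to-burstiness argument made above.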
Naam kandidaat: R. Tascioglu
Afstudeerdatum: 12 februari 1998
Afstudeerproject: DS-CDMA system performance improvements through optimal and adaptive signal processing.
Begeleiding: ir. H.P.A. v.d. Boom, ir. F.J.J. Kennis
Afstudeerhoogleraar: prof.ir. G.D. Khoe
Summary: The Direct Sequence Code Division Multiple Access (DS-CDMA) system has been found to be a suitable communication scheme for multi-user communication in a Community Antenna Television (CATV) network environment in the presence of narrowband noise. However, it can sometimes happen that the processing gain is not large enough. A suggested solution is to increase the signal-to-(Gaussian-)noise ratio, but this increase can saturate the inputs of the TV receivers in the CATV network, which is of course undesirable. Because of the limited bandwidth of the system (5-30 MHz) we cannot increase the processing gain of the system. The conclusion is that narrowband interfering components will decrease the user capacity, which is also undesirable. The noise characteristic of a CATV network where we perform multi-user communication varies with the geographical location of the network, the type of the network (star type, ...) etc. The performance of the DS-CDMA system will therefore not be the same in different networks. It has been shown that an adaptive prediction filter is a very good system for improving the narrowband-interference suppressing capability of the DS-CDMA system. Suppression of narrowband interference with a one-step predictor is based on the principle that white-noise-like Pseudo Noise (PN) sequences are unpredictable. The reconstructed interference can be subtracted from the simultaneously accessing signals that are corrupted with these narrowband components. Because of the adaptive nature of the prediction filter we are able to cancel the narrowband interfering components. The adaptive prediction filter does not require a priori knowledge of the amplitudes, frequencies or phases of the interfering components; even a priori knowledge of the number of interfering components is not required. This is shown with the performed simulations.
These properties of the prediction filter allow the system to support many more users than without this filter. To use the suppressing property of the one-step predictor optimally, we should have white sequences, which implies a flat spectrum with a Gaussian character. The whitened PN sequences have been designed according to the Spectral Factorization Theorem and have better cross-correlation properties, as shown by simulations. This whitening automatically improves the Bit Error Rate (BER) performance of the DS-CDMA communication system by lowering the Multiple Access Interference (MAI), or allows many more users. One of the problems of the DS-CDMA communication system is the MAI. In most cases, and also in the literature, the analysis of the BER performance of the DS-CDMA system assumes that the Multiple Access Interference obtained with the conventional PN (Preferentially Phased Gold) sequences is a white Gaussian process, which it is not. The conventional correlation receiver has been designed to detect a signal optimally in white noise; because of this property of the MAI, the conventional correlation receiver will not operate optimally. A Wiener filter DS-CDMA receiver has been shown to be a system that can suppress the MAI.
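The one-step prediction idea can be sketched with an LMS predictor: the white PN chips are unpredictable, the sinusoidal interferer is not, so the prediction error retains the PN signal while the interferer is largely cancelled. The filter length, step size and interferer below are illustrative choices, not those of the thesis.

```python
# One-step LMS linear predictor used as a narrowband interference
# canceller: the prediction error e[n] = x[n] - w^T x[n-1..n-taps] is
# the "cleaned" signal fed to the despreader.

import math, random

def lms_predictor_error(x, taps=8, mu=0.01):
    w = [0.0] * taps
    errors = []
    for n in range(taps, len(x)):
        past = x[n - taps:n][::-1]
        pred = sum(wi * pi for wi, pi in zip(w, past))
        e = x[n] - pred                      # prediction error
        w = [wi + mu * e * pi for wi, pi in zip(w, past)]
        errors.append(e)
    return errors
```

Running this on PN chips plus a strong sinusoid, the error-signal power converges toward the PN power alone, with no prior knowledge of the interferer's amplitude, frequency or phase, which is the adaptivity argument made above.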
Leerstoel Elektronische Bouwstenen
Naam kandidaat: B. Jacobs
Rapportnr.: EEA-538
Afstudeerdatum: 23 april 1998
Afstudeerproject: Modelling an Al0.2Ga0.8As-In0.15Ga0.85As-GaAs pHEMT using ATLASII
Begeleiding: ir. J.S. Wellen, dr.ir. Th.G. van der Roer
Afstudeerhoogleraar: prof.dr.-ing. L.M.F. Kaufmann
Summary High Electron Mobility Transistors (HEMTs) are important electrical components in contemporary systems. These transistors are especially useful in devices which must operate at high frequencies. In this report the behaviour of an AlGaAs-InGaAs-GaAs pHEMT was analysed using the two-dimensional device simulator ATLASII. The analysis included a quasi-equilibrium (Vds = 0 V), static DC and AC analysis of the pHEMT. In order to obtain reliable results, the original epitaxial structure had to be changed: as ATLASII cannot simulate tunnelling effects, the abrupt AlGaAs-InGaAs junction had to be replaced by a graded junction. The quasi-equilibrium behaviour as predicted by the charge control model was analysed both quantitatively and qualitatively. The charge control model gave accurate predictions of the sheet carrier density when the voltage-dependent Fermi energy level and the incomplete depletion of the AlGaAs donor layer were accounted for. The simulations of the static DC behaviour showed that the influence of the bulk current is very important for both the transconductance and the output conductance. The parasitic MESFET current had little effect on the overall results; this was caused by an underestimation of the electron concentration in the AlGaAs layer by ATLASII. The simulations of the small-signal behaviour and the experimental results showed identical behaviour. However, the simulated and experimental S-parameters differed considerably in magnitude. A small-signal model was presented which describes the microwave behaviour of the pHEMT. The component values of this model were determined using the Hewlett-Packard Microwave Design System (HP-MDS). With this software package the small-signal model was matched to ATLASII-simulated S-parameters. Initial values for the matching procedure were generated using a low-frequency approximation of the small-signal model.
The physical reliability of the capacitance values of this approximation was tested using simulations of the charge transfer inside the pHEMT. Only the drain-source capacitance proved to be unreliable. The matched component values did not differ considerably from the initial values, except for the drain-source capacitance. This was caused by the large drain resistance associated with the absence of tunnel currents. It is the author's opinion that the tested device simulator ATLASII (version 2.0.0) is not suitable for simulating complex epitaxial structures. The most interesting effects cannot be simulated because of convergence problems. Furthermore, the device simulator was not easy to use and the documentation was inadequate.
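The charge control model mentioned above, in its simplest form, relates the sheet carrier density linearly to the gate overdrive: n_s = ε(V_gs − V_th)/(q(d + Δd)). The report's refinements (the voltage-dependent Fermi level and the incomplete depletion of the donor layer) are omitted here, and all parameter values are generic AlGaAs/InGaAs numbers chosen for illustration, not those of the simulated device.

```python
# Textbook charge control model for a HEMT channel: n_s grows linearly
# with gate overdrive. d is the barrier thickness, dd the effective
# channel offset; all values below are illustrative assumptions.

Q = 1.602e-19            # elementary charge, C
EPS0 = 8.854e-12         # vacuum permittivity, F/m

def sheet_density(vgs, vth=-0.8, eps_r=12.2, d_m=30e-9, dd_m=8e-9):
    """Sheet carrier density n_s in m^-2 (0 below threshold)."""
    if vgs <= vth:
        return 0.0
    return eps_r * EPS0 * (vgs - vth) / (Q * (d_m + dd_m))
```

With these illustrative numbers, Vgs = 0 V gives n_s on the order of 10^16 m^-2 (10^12 cm^-2), a typical magnitude for such heterostructures.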
Leerstoel Elektromagnetisme
Naam kandidaat: M.H.H. van Eeuwijk
Rapportnr.: EM-3-98
Afstudeerdatum: 11 juni 1998
Afstudeerproject: Modelling the impulse radiation antenna.
Begeleiding: dr.ir. A.P.M. Zwamborn (TNO-FEL)
Afstudeerhoogleraar: prof.dr. A. Tijhuis
Summary: The pollution of areas with large quantities of anti-tank and anti-personnel land mines, especially in countries of former armed conflicts, like Afghanistan, Angola, Cambodia, Iraq, Kuwait, Somalia, Vietnam and Yugoslavia, is a major problem. According to the Mine Clearance Planning Agency in Afghanistan, over a period of 15 years an estimated 20,000 civilians have been killed and 400,000 wounded by land mines in that country. The current rate is 4000 killed and another 4000 wounded annually, world-wide. This report describes a part of the current developments at TNO Physics and Electronics Laboratory (TNO-FEL) into the design of a Ground-Penetrating Radar (GPR) system to determine the possibilities of electromagnetic detection and classification of buried objects. These developments began in September 1995 with an internal project. The objective is to study the feasibility of applying ultra-wideband (UWB) techniques and the singularity expansion method (SEM) to the detection and classification of buried objects. In a UWB system, like a ground-penetrating radar system, a short electromagnetic pulse is transmitted towards the object of interest. The resulting scattered field then contains information about the target. One of the components of the GPR is the Impulse-Radiating Antenna (IRA). The behaviour of the IRA is the main subject of this report. This behaviour is determined by the electromagnetic field as a function of position and time. The report gives a detailed description of a numerical method to determine the currents on the antenna. When these currents are known, the radiated electromagnetic field can be calculated. Almost all elements of the antenna may be treated as infinitely thin, perfectly conducting surfaces. A straightforward method to solve scattering problems for such surfaces is based upon the Electric-Field Integral Equation (EFIE). For a general antenna configuration this integral equation cannot be solved analytically.
Instead, a numerical solution procedure developed by Rao, Wilton and Glisson is applied. With the aid of this procedure, the scattering problem is solved for a perfectly conducting plate excited by a normally and a grazingly incident electromagnetic wave, for different dimensions of the plate. After validation of the numerical results thus obtained, the chosen numerical method is applied to the scattering by a perfectly conducting right-angled triangle and by one feed plate of the antenna under investigation.
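The expand-and-test idea behind such a numerical solution can be illustrated on a far simpler, hypothetical electrostatic problem: a thin straight wire held at 1 V, discretised with pulse basis functions and point matching. This is a sketch of the method-of-moments principle only, not the Rao-Wilton-Glisson triangular-patch scheme of the report; the wire dimensions are invented.

```python
import numpy as np

# Thin-wire electrostatics by the method of moments (pulse basis,
# point matching). The unknown line-charge density is expanded in
# piecewise-constant segments; enforcing the 1 V potential at the
# segment centres yields a dense linear system, just as the EFIE
# discretisation yields one for the surface currents.
eps0 = 8.854e-12
L, a, N = 1.0, 1e-3, 51            # wire length [m], radius [m], segments
dz = L / N
z = (np.arange(N) + 0.5) * dz      # matching points (segment centres)

# Potential at centre m due to unit charge density on segment n
# (the integral over one segment has a closed form in terms of asinh).
u = z[:, None] - z[None, :]
A = (np.arcsinh((u + dz/2) / a) - np.arcsinh((u - dz/2) / a)) / (4*np.pi*eps0)

sigma = np.linalg.solve(A, np.ones(N))   # line-charge density per segment [C/m]
```

The solution shows the expected edge behaviour: the charge density is symmetric and peaks towards the wire ends. The RWG procedure applies the same idea to the surface current on triangulated plates.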
Candidate: L.J.H. Hardy
Graduation date: 11 June 1998
Graduation project: Excitation of a high voltage cable.
Supervision: dr.ir. B.P. de Hon
Graduation professor: prof.dr. A. Tijhuis
Report no.: EM-2-98
Summary:
In this report, the behaviour of the transient electromagnetic field in a standard coaxial cable and in a coaxial cable with a helical shielding is described. Such fields are caused by partial discharges inside the insulation. The metal boundaries are assumed to be perfectly conducting, and the insulation of the cables is assumed to be a homogeneous, lossless dielectric. The helical shielding is perfectly conducting parallel to the wires and completely penetrable perpendicular to them. The domain surrounding the cable is filled with air. Maxwell's equations are first used to determine the general source-free solutions for piecewise homogeneous, circularly cylindrical configurations. The derivation involves Fourier transformations with respect to space and time. Applying the boundary conditions then leads to the characteristic equations for the coaxial cable. Since these equations are too complex to solve analytically, a numerical method is applied. Subsequently, the electromagnetic field is transformed to the space-frequency domain, avoiding poles on the integration contour. The transformation to the space-time domain is then carried out by a straightforward FFT operation. Numerical results are presented for the modes that correspond to the
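The last step described above, a single inverse FFT from a uniformly sampled frequency spectrum back to the time domain, can be sketched generically. The spectrum below is that of a delayed Gaussian test pulse with invented parameters, standing in for the cable's space-frequency-domain field.

```python
import numpy as np

# A signal known on a uniform frequency grid is brought to the time
# domain with one inverse FFT; scaling by the sample rate turns the
# discrete sum into an approximation of the continuous inverse transform.
fs = 1e9                       # sample rate [Hz]
N = 1024
t = np.arange(N) / fs
f = np.fft.rfftfreq(N, 1/fs)   # frequency grid matching np.fft.irfft

# Frequency-domain data: analytic spectrum of a delayed Gaussian pulse
t0, tau = 100e-9, 5e-9
spec = tau*np.sqrt(np.pi) * np.exp(-(np.pi*f*tau)**2) * np.exp(-2j*np.pi*f*t0)

# Straightforward FFT operation back to the time domain
x = np.fft.irfft(spec, n=N) * fs

# Reference: the same pulse written directly in time
x_ref = np.exp(-((t - t0)/tau)**2)
```

Provided the spectrum has decayed before the Nyquist frequency and the pulse fits well inside the time window, the reconstructed trace matches the analytic pulse to high accuracy.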
CAPACITEITSGROEP MEET & BESTURINGSSYSTEMEN
Leerstoel Meten en Regelen
Candidate: M.J.C. Berende
Graduation date: 10 December 1998
Graduation project: Modelling of solid state fermentation: Koji fermentation.
Supervision: ir. A.J.W. v.d. Boom (TUE), M. Keulers (Unilever)
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: A model for solid state fermentation has been developed. The model treats solid state fermentation in general and Koji fermentation in particular. Koji fermentation is part of the soy sauce production process: the production of flavours from soy beans and wheat bran. During Koji fermentation a mould grows in a substrate of soy flakes, wheat bran and water. The mould produces enzymes that are needed for the next fermentation step in soy sauce production. Mould growth and enzyme production both depend on temperature and moisture. The main problem during Koji fermentation is the heat production caused by the mould growth. This heat must be removed to maintain growth-stimulating conditions. The objectives of the model are: to gain insight into the production process in order to maximise mould growth; to gain insight into the process in order to maximise enzyme production; and to predict the consequences of changes in the process. The model consists of two parts. The physiological part relates mould growth and enzyme production to the temperature and moisture of the substrate. The physical part relates the temperature and moisture of the substrate to the temperature and moisture of the environment.
Once both model parts are known, mould growth and enzyme production can be controlled, and thus maximised, by controlling the environmental conditions. This research focuses on the physical part of the model. It treats the heat and moisture exchange between the substrate and the air flowing through it. The model that has been developed is a white-box model, based on physical relations. Experiments have been carried out to develop and validate the model and to estimate the model parameters. The most important results are: evaporative heat transfer is the dominant heat transport mechanism; conductive heat transfer can be neglected, provided that the walls of the fermentor are made of a material with a low thermal conductivity and the air velocity is sufficiently high; and mould growth is exponential, with a quantified temperature dependence. To control the temperature during mould growth, the air velocity must be increased while the air humidity is kept high.
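The core of the physical model part, an energy balance in which metabolic heat from exponential mould growth is removed mainly by the air stream, can be sketched as a lumped simulation. All parameter values, and the lumping of evaporative and convective transfer into one coefficient, are illustrative assumptions, not the values identified in the report.

```python
import numpy as np

# Lumped energy balance over the substrate bed (illustrative parameters).
mu    = 0.2      # specific growth rate [1/h] (exponential growth)
Y_q   = 8.0e3    # metabolic heat per unit of new biomass [J/g]
h_air = 1.0e4    # lumped evaporative/convective removal, scales with airflow [J/(h*K)]
C_bed = 2.5e3    # heat capacity of the bed [J/K]
T_air = 30.0     # inlet air temperature [degC]

dt, hours = 0.01, 24.0
steps = int(hours / dt)
X, T = 1.0, 30.0                      # biomass [g], bed temperature [degC]
Ts = np.empty(steps)
for i in range(steps):
    growth = mu * X                   # dX/dt: exponential mould growth
    q_gen  = Y_q * growth             # metabolic heat production [J/h]
    q_rem  = h_air * (T - T_air)      # heat removal by the air stream [J/h]
    X += growth * dt
    T += (q_gen - q_rem) / C_bed * dt
    Ts[i] = T
```

As the biomass grows, the heat load grows with it, so the bed temperature keeps climbing unless the airflow-dependent removal coefficient is raised. This mirrors the conclusion above that the air velocity must be increased while the humidity is kept high.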
Candidate: W. de Boer
Graduation date: 10 December 1998
Graduation project: Active magnetic bearings: modelling and control of a 5 DOF rotor.
Supervision: ir. V. van Acht
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: For research on magnetic bearings, levitation and drives, aimed at realising a deflection unit for a 3D laser interferometer, an active magnetic bearing system from the Swiss company MECOS Traxler AG is used. The shaft is borne magnetically in five degrees of freedom, and the sixth degree of freedom is driven by a motor. The simplest model of an active magnetic bearing system is a small magnetically conducting ball actively levitated by an electromagnet. Displacements of the ball are measured with a sensor, whose signal is the input to a controller. This controller controls the current fed to the electromagnet, which closes the loop. When the current is increased, the magnetic force increases and the ball moves towards the coil; this makes the air gap between the ball and the electromagnet smaller, which makes the force even stronger. An active magnetic bearing system is therefore unstable in open loop. When this simple model is extended with a second electromagnet, the total force acting on the rotor is a function of the displacement in the nominal air gap and the current. Examined in the s-domain, the two poles of the system lie on the real axis, symmetric about the imaginary axis. This model can be extended to a fully levitated shaft with six degrees of freedom, five of them magnetically borne. The motions in the axial direction can be decoupled from the motions in the radial direction, leaving four degrees of freedom for which a model is set up and a controller is designed. For high rotation speeds the gyroscopic effect (the coupling between the two rotations about the axes perpendicular to the rotation axis of the shaft) becomes more important. When a complete state-space description is set up, the direct transfers consist of four poles and two zeros. Two pole/zero pairs are caused by the non-collocation of the sensor and actuator. Identification of these pole/zero pairs is very difficult.
The other two poles are the result of the magnetic levitation. The identification of an unstable, nonlinear multiple-input multiple-output system is hard, especially when the excitation can only be applied to one input at a time, only sinusoidal signals can be used, and there is not enough memory to store more than one output signal at a time. However, measuring the closed-loop frequency responses, calculating the open-loop response from the knowledge of the controller, and including the knowledge of the steady-state gain makes it possible to identify the direct couplings in the frequency domain. The indirect couplings can be obtained from the white-box model. For an active magnetic bearing system it is useful to use robust controllers, which can handle different loads and rotation speeds. To suppress the gyroscopic couplings, the best solution is a full multiple-input multiple-output controller. Since this could not be implemented in the practical set-up, a multiple-input multiple-output controller was designed for only the direct couplings and the couplings within the plane. This kind of controller also seemed very promising. When single-input single-output controllers are designed for multiple-input multiple-output systems, special attention should be paid to the indirect couplings, which can cause the system to become unstable.
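The single-axis instability described above can be made concrete with a small state-space sketch. Linearising the net force of an opposed-magnet pair about the nominal gap gives m·x'' = ks·x + ki·i with positive ks (negative stiffness); the numbers below are illustrative, not the parameters of the MECOS system.

```python
import numpy as np

# Linearised one-axis magnetic bearing: m*x'' = ks*x + ki*i, ks > 0.
m, ks, ki = 2.0, 4.0e5, 40.0          # mass [kg], neg. stiffness [N/m], [N/A]

A = np.array([[0.0, 1.0],
              [ks/m, 0.0]])            # states: position x, velocity v
B = np.array([[0.0], [ki/m]])

open_poles = np.linalg.eigvals(A)      # +/- sqrt(ks/m): mirrored real poles

# Stabilising state feedback i = -g1*x - g2*v (a PD-like law)
g1, g2 = 2.0e4, 50.0
Acl = A - B @ np.array([[g1, g2]])
closed_poles = np.linalg.eigvals(Acl)
```

The open-loop poles lie on the real axis, mirrored across the imaginary axis, exactly as stated in the summary; the simple state feedback moves both into the left half-plane.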
Candidate: M. van Duijnhoven
Graduation date: 10 December 1998
Graduation project: Time-optimal position control of an induction motor based on the dynamic contraction method.
Supervision: dr.ir. M. Blachula
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: Due to the decreasing prices of converters, the induction motor is becoming more and more popular for controlled-speed or position applications. The induction motor is a nonlinear system and not all state variables are necessarily measurable. The parameters, especially the rotor resistance, vary significantly from their nominal values. The motor equations are given by:
dθ/dt = ω,    J·dω/dt = Te − Tl
With J the inertia, Te the electrical torque, and Tl the load torque. A motor has several electrical and mechanical limits, which can be translated into speed and acceleration limits. Field-oriented control is applied to decouple the control inputs. The first problem is to control Te, which is a function of different parameters that are nonlinearly and dynamically coupled. The Dynamic Contraction Method (DCM) is applied to the AC motor to control Te. This method is similar to Nonlinear Inverse Dynamics (NID), but it can be used in systems with uncertain parameters. A comparison between a PI controller and DCM was made for stepwise inputs; in this case the DCM controller was superior. After controlling the current and flux with DCM controllers, the motor is described by a two-integrator problem with an extra time constant. The goal of this report is to solve this system as a pure two-integrator problem, without resorting to the two-integrator problem with an extra time constant, which is considerably more difficult to solve and implement. The time-optimal solution consists of two intervals of maximum positive/negative electrical torque, which results in maximum acceleration/deceleration. If the position error is large enough, there is a third interval with maximum speed and zero acceleration. Whether the system can be approximated by a two-integrator problem depends on the motor parameters. As will be shown, our motor can be approximated by the two-integrator problem. Of the two control structures considered, the non-feedforward structure is superior for larger position errors if the torque is not known. This last structure is compared with a control structure from [1]. The control structure from that article had some disadvantages: our current and flux controllers can use higher gains, which result in smaller steady-state errors and higher robustness. In the last part of this thesis, observers are applied to the motor. The most problematic part is the estimation of the flux angle if the rotor resistance differs from its assumed value.
All other estimations depend on the proper observation of the flux angle.
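The time-optimal solution for the pure two-integrator approximation described above can be checked numerically: for a rest-to-rest move without an active speed limit it is bang-bang, maximum torque to the halfway point and maximum braking after it. The acceleration limit and target below are invented.

```python
import math

# Bang-bang time-optimal rest-to-rest move of the two-integrator model
# (position theta, speed omega, acceleration limited by max torque / J).
a_max, d = 50.0, 2.0                 # acceleration limit [rad/s^2], target [rad]
t_switch = math.sqrt(d / a_max)      # halfway point is reached here
t_end = 2.0 * t_switch               # minimum possible move time

dt = 1e-5
theta, omega, t = 0.0, 0.0, 0.0
while t < t_end:
    a = a_max if t < t_switch else -a_max   # single torque reversal
    omega += a * dt
    theta += omega * dt
    t += dt
```

The simulated trajectory arrives at the target with numerically zero speed in time 2·sqrt(d/a_max); with an active speed limit a third, constant-speed interval appears, as the summary states.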
Candidate: F.A.A. Engelen
Graduation date: 27 August 1998
Graduation project: Online NADH fluorescence measurements in an oscillating yeast culture
Supervision: prof. Kuriyama, M. Keulers (Unilever)
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary:
At the Biochemical Engineering Laboratory of the National Institute of Bioscience and Human Technology in Tsukuba, Japan, autonomous metabolic oscillations in yeast cultures are studied. Oscillations with a period of 40 to 80 minutes occur when Saccharomyces cerevisiae IFO-0233 yeast is cultivated using ethanol, glucose, or acetaldehyde as carbon source. During these oscillations, ethanol is converted into acetaldehyde; to convert ethanol to acetaldehyde, NAD is converted into NADH. To understand these oscillations, NADH measurements are required. Previous attempts to analyse the amount of NADH with a complex sampling technique failed. A fluorometer can provide the necessary information about the NADH concentration in the yeast culture. The fluorometer uses the fluorescence property of the NADH compound: when NADH is exposed to light of 365 nm, the molecules become excited, and when the electrons fall back to their original energy level, light of a longer wavelength (460 nm) is emitted. The intensity of the emitted light represents the amount of NADH present. The excitation light is guided into the solution through an optical fiber. The excitation light is scattered by the yeast particles in the solution, and the reflected light and the emitted light are guided back by a second optical fiber to the photomultiplier tubes. This optical way of measuring the concentration of a compound is very sensitive to any disturbance in the solution: air bubbles and other kinds of disturbances absorb the excitation light and influence the measurements. An analog-to-digital converter is used to digitise the analog fluorescence, reflectance and light-intensity signals, and a program is written to store the data and to integrate them with the data acquisition system. A lamp compensation unit on the fluorometer provides important information about the light intensity of the lamp and compensates the fluorescence and reflectance signals for long-term drift.
The extra information from the lamp compensation unit appeared to be very useful to distinguish signal changes from light-intensity disturbances. Techniques such as signal drift compensation and smoothing are described to analyse and interpret the data from the fluorometer. Tests with different cell densities show correlations between the reflectance signal and the cell concentration that can be used to monitor cell growth during the fermentation process. pH changes influence the fluorescence and reflectance signals. Fluorescence measurements in non-oscillating yeast cultures show an increase of the fluorescence signal under nitrogen aeration and a decrease of the fluorescence signal when oxygen is used for aeration. This observation is theoretically explainable and demonstrates that the fluorometer works. Experiments were carried out to find the optimal conditions to measure NADH oscillations. Finally, the system is able to measure the redox state of the NAD/NADH couple continuously on-line by measurement of fluorescence (excitation 365 nm, emission 460 nm). Maximum NADH fluorescence is observed in each cycle as respiration slows to its minimum (i.e. as dissolved oxygen reaches a maximum). Fluorescence then decreases as respiration increases to its maximum rate. Recovery of NADH occurs in a biphasic manner as respiratory-chain activity slows, so that a secondary maximum of fluorescence is recorded between peaks of dissolved oxygen. The most important finding is that the basic mechanism of this oscillation is considered to be periodic inhibition of the respiratory chain. The carbon-flux change during the oscillation is considered to be the result of respiratory-activity change through a change in the NADH/NAD balance. Data from the fluorometer, in combination with other data, could provide new hypotheses about the oscillation mechanism.
Based on these hypotheses, mathematical modelling, biochemical analysis and genetic investigation will be carried out, and the real mechanism of the oscillation is expected to be clarified.
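The drift-compensation and smoothing steps mentioned above can be sketched on a synthetic trace; the 60-minute oscillation, linear drift and noise level below are invented stand-ins for a fluorescence record.

```python
import numpy as np

# Synthetic "fluorescence" trace: a slow metabolic oscillation plus a
# linear lamp/long-term drift plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 240.0, 2401)                  # minutes
signal = np.sin(2*np.pi*t/60.0)                    # 60-min oscillation
trace = signal + 0.01*t + 0.05*rng.standard_normal(t.size)

# 1) drift compensation: subtract a least-squares straight line
p = np.polyfit(t, trace, 1)
detrended = trace - np.polyval(p, t)

# 2) smoothing: centred moving average over a ~2-minute window
w = 21
smooth = np.convolve(detrended, np.ones(w)/w, mode='same')
```

Subtracting the least-squares line removes the slow drift, and the centred moving average suppresses measurement noise while leaving the much slower metabolic oscillation essentially intact.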
Candidate: E. Enkor
Graduation date: 23 April 1998
Graduation project: Lateral MIMO control of a bus
Supervision: ir. D. de Bruin
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: This project deals with the automatic control of a bus. The bus has to drive on a special road, a narrow lane that may not be accessed by any other traffic. To define a reference trajectory, some kind of guiding system has to be used: a (magnetic) guiding line is placed in the road, and the lateral position of the bus is determined with a magnetic sensor. Of course there are also other guiding systems, such as discrete markers along the road or vision systems (two cameras). The controller to be designed must deal with many situations during driving. Whether the bus has to take a curve or make a bus stop, and even in the presence of environmental disturbances such as wind gusts or the condition of the road surface (dry, wet, icy), the controller still has to cope with all these situations, i.e. it must keep the bus on the track. We have designed a MIMO H∞ controller for the lateral position of the bus. To design such a controller, the following steps have to be taken. First, the dynamics of the bus are modelled; to this end the equations of motion of the vehicle (kinematics) and the forces that occur in the road-tire contact are investigated. Some parameters of the bus are uncertain, such as the mass distribution (full or empty bus) or the road conditions mentioned above. All this gives rise to model perturbations and hence different dynamics, and the controller must stabilise the vehicle for all of them. After modelling the vehicle, a suitable controller form has to be chosen, and simulations have to be carried out to check whether the design specifications are met, followed by some conclusions and recommendations.
Candidate: P.J.M.M.L. Franssen
Graduation date: 12 February 1998
Graduation project: Contour correspondences in an image sequence recorded by a moving camera
Supervision: Liu Hong, M.Sc., ing. W. Hendrix
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: This report deals with one of the fundamental subjects of computer vision: the so-called correspondence problem. When an image of a 3-D scene is recorded by a camera, the resulting image is a projection of the three-dimensional scene onto a two-dimensional image plane. During the recording stage the depth information is lost. This depth information can be recovered by using e.g. stereo vision or structure from motion. Both methods use a sequence of images that show the scene from different points of view. In the recorded images, pairs of features can be found that are projections of the same feature in the original 3-D scene. When these pairs of features are found, the depth information can be recovered by triangulation. The process of finding these corresponding pairs of features is called the correspondence problem. The effort required to find these pairs depends on the type of feature used. A simple feature such as a line segment is easy to extract from the images, but searching for correspondence is difficult because there are many lines present in the images. By using a more complex feature, the burden of finding correspondence is moved from the matching stage to the feature extraction stage. In this report, contours are used to search for correspondence. These contours consist of line segments that form closed polygons. By using contours, the number of features present in the scene is reduced. The critical factor in this approach is the completeness with which these contours can be determined from the available images.
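For the rectified two-view case, the triangulation step mentioned above reduces to a one-line relation between disparity and depth: Z = f·B/d, with focal length f (in pixels), baseline B and disparity d. A minimal sketch with invented numbers:

```python
# Depth recovery by triangulation for a rectified stereo pair.
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f*B/d of a scene point from the disparity of its two
    corresponding projections (f in pixels, B in metres, d in pixels)."""
    if disparity_px <= 0:
        raise ValueError("matched features must have positive disparity")
    return f_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 12 cm baseline, 16 px disparity
z = depth_from_disparity(f_px=800.0, baseline_m=0.12, disparity_px=16.0)
```

The relation also shows why the matching stage matters so much: a wrong correspondence gives a wrong disparity, and hence a wrong depth.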
Candidate: B.M. Giebel
Graduation date: 11 June 1998
Graduation project: H-infinity control of an overhead crane
Supervision: dr.ir. A.A.H. Damen
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: This report deals with a controller for a container crane. A container crane consists of a movable trolley on a rail, with a load suspended from the trolley by a cable of varying length. To control this system, the nonlinear and linear system descriptions of the load and the trolley are derived. The actual values of all parameters used in the system are given. Because the hoisting mechanism uses a special transmission, namely a worm gear, this thesis also provides a description and a dynamic model of this part of the system. Next, the designs of an H∞ controller and of a classical PID controller with nonlinear feedback, which control and stabilise the system, are discussed. The design of the various filters for the H∞ technique is treated in detail. Naturally, an evaluation of the simulations and of tests on the real system is included. The real system is a scale model of a container crane available at the Measurement and Control group of Eindhoven University of Technology. We will see that the H∞ controller is only stable for the nominal process (the process with fixed cable length), whereas the PID controller is stable for all cable lengths.
Candidate: R.J.M. de Jongh
Graduation date: 23 April 1998
Graduation project: Controlling the reticle stage of a lithographic wafer-scanner
Supervision: dr.ir. A.A.H. Damen, dr. H. Butler (ASML)
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: The wafer scanner is a tool for the production of semiconductors. The structure on a mask, called the reticle, is transferred to the wafer by a lithographic process. The mask and the wafer must be moved through a rectangular beam of laser light. This scanning principle demands high precision; to achieve high throughput, high velocities are also necessary. Principles of the position control design of the reticle stage are studied in this document. The main aim of the document is to highlight the advantages and disadvantages of MIMO controllers. The reticle stage has six degrees of freedom; to keep the assignment transparent, only three degrees are considered. A simple model of the reticle stage, consisting of two masses and a flexible coupling, is used. Three SISO PID controllers with notch filters are used as reference. Until now, the wafer scanners have been supplied with SISO controllers for every degree of freedom. This thesis treats the advantages and disadvantages of several MIMO controllers. Three different design methods are studied: LQG, H∞ and μ-analysis/synthesis. They are applied to the feedback controller design. The feedforward controller has sufficient performance, so this part of the control remains unchanged. The LQG controller shows high performance but a lack of robustness; the observer allows adaptation of the controller. The H∞ controller suffers from conservatism: the magnitude of the model uncertainties is very high, and this unstructured approach results in bad performance. The last control design, by μ-analysis/synthesis, is superior to the other controllers. The μ controller achieves an appropriate trade-off between high performance and sufficient robustness. Furthermore, a different approach to gain scheduling and gain balancing is suggested, leading to improved performance.
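The simple stage model mentioned above, two masses joined by a flexible coupling, already shows why notch filters appear in the SISO designs: besides the rigid-body mode it has one internal resonance. The mass and stiffness values below are illustrative only, not those of the real stage.

```python
import numpy as np

# Free dynamics of two masses m1, m2 coupled by a spring k (damping
# neglected). States: [x1, x2, v1, v2].
m1, m2, k = 10.0, 5.0, 1.0e7

A = np.array([[0.0,   0.0,  1.0, 0.0],
              [0.0,   0.0,  0.0, 1.0],
              [-k/m1, k/m1, 0.0, 0.0],
              [k/m2, -k/m2, 0.0, 0.0]])
eig = np.linalg.eigvals(A)

# Two eigenvalues at the origin (rigid-body mode) and a purely imaginary
# pair at the internal resonance the controllers have to cope with.
f_res = np.max(np.abs(eig.imag)) / (2.0*np.pi)   # resonance frequency [Hz]
```

The resonance sits at sqrt(k·(1/m1 + 1/m2))/2π; it is this kind of flexible mode, together with the model uncertainty around it, that drives the trade-offs between the LQG, H∞ and μ designs.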
Candidate: M.L.H. van Laer
Graduation date: 12 February 1998
Graduation project: Modeling and controlling the water and stock circuit of a paper machine
Supervision: dr.ir. A.J.W. v.d. Boom, ing. G. Hees (Roermond Papier), ing. E. Pauw (Siemens)
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: The raw material of the paper production process at Roermond Papier is 100% waste paper. At the stock preparation, the waste paper is dissolved in water. After the stock (paper fibers in water) has been cleaned and screened at the stock preparation, it goes to the actual paper machine with a consistency of 1.5%, meaning the stock contains 1.5% paper mass and 98.5% water mass. At the paper machine the paper sheet is formed and the water within the paper sheet is extracted; the consistency, or dryness, of the produced paper is 93%. The water extracted from the paper sheet at the paper machine goes back to the stock preparation to dissolve the waste paper again. The thesis assignment is to construct a dynamical model of the water and stock circuit and to design one or more controllers to optimise the behaviour of the water and stock circuit. A white-box model of the stock preparation is created. The white-box model contains three major components of the process: the chest, the local PID controller and the actuator. To build a valid white-box model, the dynamical behaviour of these components has to be described. The dynamical behaviour of the chests and the local PID controllers is known. However, we failed to describe the nonlinear dynamical behaviour of the actuators, mainly because of the lack of available flow sensors. To avoid the unobservable nonlinear dynamical behaviour of the actuators, a black-box model approach was started. The inputs and outputs of the black-box model were selected in such a way that a supervisory controller would be able to control the water and stock circuit. The inputs are setpoints of local PID controllers, which reduce the nonlinear behaviour of the actuators. The underlying local control strategy is of great importance and has to be chosen carefully. Four possible control strategies are described. The expected performance of the black-box model and its controller did not fit in the existing production process.
A change of control strategy is suggested. The suggested control strategy is primarily flow controlled and secondarily level controlled. A nonlinear flow controller is introduced to maintain as steady a flow as possible throughout the stock preparation. Using flow controllers makes it possible for a supervisory controller to give a stock demand order to the stock preparation. The supervisory controller determines a stock production setpoint for the stock preparation and also for the water flow from the water circuit towards the waste water treatment. The supervisory control strategy depends on the actual production situation. It is stated that the flow control strategy with its supervisory controller has many advantages: the behaviour of the water and stock circuit will improve; the quality of the stock leaving the stock preparation towards the paper machine will be steadier; the waste water treatment has a steadier load, which will improve cleaning efficiency; and energy savings are possible. However, to achieve the best performance of the suggested control strategy, it is necessary to install extra water buffer capacity.
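The chest component of the white-box model described above follows from two mass balances, one for volume (level) and one for fibre mass (consistency), assuming perfect mixing. The flows and volumes below are invented, not Roermond Papier's.

```python
# Euler simulation of one perfectly mixed chest in the water/stock circuit.
q_in, c_in = 0.5, 0.015        # inflow [m^3/s] and its consistency (1.5%)
q_out      = 0.5               # outflow [m^3/s]
V, c       = 20.0, 0.010       # initial volume [m^3] and consistency

dt, steps = 1.0, 3600          # 1 s steps, one hour
for _ in range(steps):
    dV = (q_in - q_out) * dt               # level (volume) balance
    dM = (q_in*c_in - q_out*c) * dt        # fibre mass balance
    M  = V*c + dM
    V += dV
    c  = M / V
```

With matched flows the level stays constant and the consistency relaxes towards the inflow value with time constant V/q. Keeping the flows steady, as the suggested flow-control strategy does, keeps these dynamics slow and predictable.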
Candidate: J.A.L.M. Lambregts
Graduation date: 23 April 1998
Graduation project: Modelling and control of a system with magnetic hysteresis
Supervision: ir. Y. Boers
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary:
This report investigates the effects of magnetic hysteresis on a control system. A number of different hysteresis models are examined. To illustrate the effect of hysteresis on a control system, a magnetic levitation system is considered: a controller for such a system is designed, and the effect of hysteresis on the controlled system is evaluated.
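One of the simplest rate-independent hysteresis elements, the backlash or "play" operator used as a building block in Prandtl-Ishlinskii models, can be sketched in a few lines. It is an illustrative stand-in, not necessarily one of the models compared in the report.

```python
# Backlash ("play") operator: a minimal rate-independent hysteresis element.
def play(inputs, r, y0=0.0):
    """Play operator with half-width r: the output only follows the input
    once the input has moved more than r away from the current output."""
    y, out = y0, []
    for u in inputs:
        y = min(max(y, u - r), u + r)   # clamp y into the band [u-r, u+r]
        out.append(y)
    return out

up   = play([0.0, 1.0, 2.0, 3.0], r=1.0)            # loading branch
down = play([3.0, 2.0, 1.0, 0.0], r=1.0, y0=2.0)    # unloading branch
```

Running an increasing and then a decreasing input through the operator produces two different output branches at the same input value, the defining property of hysteresis that a levitation controller has to tolerate.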
Candidate: S.J.H.C. Linssen
Graduation date: 27 August 1998
Graduation project: Sensor and control development for a laser tracking system.
Supervision: dr. A. Damen
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: A laser tracking system is used to track the tool center point (TCP) of a robot. A laser beam is pointed at the center of a mirror and deflected to a retroreflector attached to the TCP. The beam coming from the retroreflector is guided to a position-sensing detector, whose output is used to adjust the tilting of the mirror so as to keep the beam in the center of the retroreflector. The mirror is borne on an air cushion and can be tilted by the forces of three actuators; therefore, there is no mechanical friction and in particular no stick-slip. For calculation of the retroreflector's position, the tilting of the mirror is determined in two directions by an inductive sensor. However, the angles are only reliable if the air gap of the mirror is constant. The air gap is measured by a capacitive sensor. Two controllers are needed: an air gap controller and an angle controller. The study described in this report concerns the improvement of the sensors and the design of the air gap and angle controllers. In particular, the electronics for the angle and air gap sensors have been improved. Both are based on an LVDT signal conditioner (AD698); in this way, drift of the oscillator amplitude does not influence the measurement. To minimise the influence of temperature variations of the primary coil used for the angle measurement, the coil was optimised within the mechanical constraints, and a circuit was developed to approximate the voltage over the inductance of the coil, as needed by the LVDT conditioner. The dependence of the air gap transfer function on the air gap was determined with a frequency response measurement. The measurements revealed several resonance frequencies, not predicted by the simple model available, most of them depending on the air gap. An air gap controller was designed, implemented and tested on the real system using a DSP system.
The controlled bandwidth was reached by choosing an adequate air gap, which shifts the resonance frequencies to higher values. After design and implementation of the angle controllers, the tracking system was tested and found able to perform the intended task.
Candidate: J.C.H. van de Meerakker
Graduation date: 23 April 1998
Graduation project: Target height estimation in multipath using the monopulse complex angle
Supervision: ir. Y. Boers, J. Brouwer (HSA)
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: When a monopulse radar tracks a low-flying target, the target height measurement is disturbed by multipath effects. In theory, the target height can then be determined using the complex angle of the monopulse ratio: a change is applied to the transmit frequency, and the target height can be calculated from the resulting phase change of the monopulse ratio. The graduation assignment was to investigate the conditions under which this method is usable for Signaal's monopulse radars. To this end, a number of error sources were modelled: antenna mispointing, glint, diffuse reflection, quantisation noise and phase imbalances between the difference and sum channels. Analysis and simulation showed the following: with rough seas and targets at high elevation, diffuse reflection causes problems; with large targets, glint causes problems; phase imbalances cause systematic errors; antenna mispointing sometimes causes loss of the target; and quantisation noise is not an important error source for the radar considered. The influence of glint, diffuse reflection and phase imbalances can be reduced by a suitable choice of the transmit-frequency step. At the same time, the choice of transmit frequency must avoid target loss due to mispointing. Low-target measurements from 1991 were used to test the method. In certain situations the method gives the correct target height; in others the result is very noisy. The results can largely be explained from the analysis performed. The main conclusion is that, once problems such as phase imbalances are solved, the studied method of target height measurement can be useful when tracking small, low targets over a calm sea.
Candidate: J. Ossewaarde
Graduation date: 27 August 1998
Graduation project: Sorting of plastics by means of Near Infrared Imaging Spectrometry.
Supervision: L. Hong, M.Sc., dr.ir. J.A. Hegt
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary:
This report summarises the work carried out during a nine-month Master's thesis traineeship at Mountside Software Development BV (The Netherlands). In the first three months, a feasibility study directed at the recycling of plastics from (municipal) household waste streams was completed. Plastics are a high-value material, but sorting by hand is unpleasant, inaccurate, expensive, and above all unhygienic. Therefore, an automatic sorting concept is advocated. Two sorting steps have been defined: 1. Sorting Step I: discrimination between plastics and non-plastics. 2. Sorting Step II: discrimination between plastics mutually. At the moment only Sorting Step II seems economically feasible, and the second part of the work had to investigate the technical feasibility of using Near-Infrared Imaging Spectrometry (NIRIS) as an identification method. The feasibility study outlined the many advantages of NIRIS compared to other techniques: NIRIS is a robust, cheap, and non-destructive technique with many applications in other areas such as agriculture, pharmacy, and process control. The main disadvantage of NIRIS is that black and transparent objects cannot be identified. The report must answer one main question: is it possible to identify plastics by means of NIRIS with a fast measuring device in the spectral range from 950 to 1700 nm? Before we were able to answer this question, we put together a list of requirements. We studied the near-infrared measuring technique and the way in which the spectral images should be processed to extract all essential information. Thirdly, we extracted the most relevant features by means of principal component analysis (PCA). Finally, we selected the Probabilistic Neural Network (PNN) as the most appropriate solution to meet all requirements. Theoretically, the solution meets all requirements, but due to lack of time not all of them could be verified.
The presented solution is able to classify at least seven plastic groups (ULDPE, HDPE, PVC, PP, ABS, polyester, and PA4_6/PA6) with an accuracy of approximately 100% and a recognition rate of 78%. The classifier recognises new, as yet unknown plastics, and is able to learn them. The combination of the PCA algorithm and the classifier allows optimal feature extraction. Unfortunately, the sorting system is not able to classify mingled plastics correctly. The effect of misclassifications on quality should be investigated more thoroughly: for instance, what quality is acceptable for certain plastics, and is it possible to tune the classifier further to meet these quality measures? Furthermore, more spectra should be measured to obtain more statistical evidence on sorting accuracy. Finally, an identical device for the spectral region from 900 to 2400 nm will be available very soon, and its use will most likely alleviate the problem of misclassifications.
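The PCA-plus-PNN pipeline described above can be sketched in a few lines. This is a minimal illustration and not the thesis code: the spectra, class labels, and the smoothing parameter sigma are hypothetical, and a real NIRIS classifier would add the rejection and learning of unknown plastics discussed in the summary.

```python
import numpy as np

def pca_features(spectra, n_components=3):
    """Project spectra (rows = samples) onto the leading principal components."""
    centered = spectra - spectra.mean(axis=0)
    # SVD of the centred data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def pnn_classify(train_x, train_y, x, sigma=0.5):
    """Probabilistic Neural Network: a Parzen-window density estimate per class;
    the class with the highest estimated density at x wins."""
    scores = {}
    for label in set(train_y):
        pts = train_x[np.array(train_y) == label]
        d2 = ((pts - x) ** 2).sum(axis=1)
        scores[label] = np.exp(-d2 / (2.0 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```

In use, each measured spectrum would first be reduced with `pca_features` and then passed to `pnn_classify` against the stored training features.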
-42-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M.T. Tahang 15 oktober 1998 Vision Inspection for Moulded Glass ir. P. Dunias, ir. W. Hendrix prof.dr.ir. P.P.J. van den Bosch
Summary: In the glass industry, a great deal of interest has been shown in delivering defect-free products to customers. To achieve this effectively, automated inspection has been applied. Two methods that are widely used in glass container manufacturing are the moulded and melted methods. The form and size of the moulds used in moulded glass manufacturing determine the dimensions and geometrical form of moulded glass products. Moulded glass containers can take any form following the form of the moulds; they may have a combination of round and non-round geometry. The objective of this work is the development of an automated three-dimensional vision inspection for moulded glass. First, a bibliographical study of different 3D measurement techniques was done. Unfortunately, most of the techniques are based on the assumption that the object to be inspected has a Lambertian surface, which is not satisfied by moulded glass. Other limitations are the missing-parts/occlusion problem, computational complexity, time-consuming data acquisition, limitation to line-structured scenes, limited spatial resolution, and the high cost of the system. To avoid these limitations, two simplified methods are proposed: first, an integrated 3D inspection based on selective diameter views using a positioning mechanism, and second, a 3D inspection based on multiple diameter samples using a rotation table. The second method was chosen for implementation because it is much easier to set up using a standard vision hardware configuration, and it can handle more variations of non-concave geometrical forms. The method has been implemented in the C programming language on a real-time operating system, using a development system and hardware configuration provided by VIMEC AVT. The system is able to extract the required three-dimensional measurements of round and non-round moulded glasses.
The system is flexible enough to inspect various geometrical forms and a range of sizes, independent of the colour of the products. Using the system to inspect moulded glass products with an allowed tolerance of 1 mm, consisting of ellipse, circle and irregular polygon geometry, a standard deviation of 25 µm or better in the measurement results can be achieved.
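The tolerance check behind such an inspection, comparing diameter samples taken over one rotation of the product against a reference profile of the mould, can be sketched as follows. The helper names and the one-sample-per-angle assumption are illustrative, not taken from the thesis.

```python
def max_deviation(measured_mm, reference_mm):
    """Largest absolute deviation between measured diameter samples
    (one per rotation angle) and the reference profile of the mould."""
    return max(abs(m - r) for m, r in zip(measured_mm, reference_mm))

def passes_inspection(measured_mm, reference_mm, tol_mm=1.0):
    """Accept the product only if every diameter sample is within tolerance."""
    return max_deviation(measured_mm, reference_mm) <= tol_mm
```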
-43-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
R.J. van Wesenbeeck 12 februari 1998 Digitalization of the gradient amplifier control loop of an MRI scanner ir. W. van Groningen, Philips Medical Systems, ir. D. de Bruin prof.dr.ir. P.P.J. van den Bosch
Summary: Within the framework of the Master's project at the Eindhoven University of Technology, Faculty of Electrical Engineering, in association with Philips Medical Systems, a feasibility study on the digitalization of the gradient amplifier control loop in an MRI scanner has been carried out. A description of the analog control loop and its components is given, and a design criterion is presented. The analog control loop is extended to a digitalized control loop. Because of the delay time introduced by the fact that the controller is a sampled system, the transient behavior deteriorates. Some general expressions for the final values of the current error and its integral are derived. These expressions show that the current error integral is non-zero in most cases, affecting the criterion. For the sampled pulse width modulator, algorithms for three sampling rates are given. Higher sampling rates result in less delay in the control loop. Feedforward seems to be a solution to many problems. It is shown that the criterion is automatically met in steady state for a proper choice of the feedforward filter, and that it can also be met during transients if the feedforward filter results in a linear phase characteristic of the feedforward path. Furthermore, the range of the current error is reduced considerably, resulting in a gain of four bits for the ADC in the error path. In the last chapters of the thesis, a study is carried out on the digitalization of the pulse widths of the pulse width modulator. It appears that by choosing an appropriate rounding scheme for the switching times, the accuracy can be improved. High accuracy, however, can only be achieved for low sampling rates, because the quantization error on the pulse width is then distributed over a longer time. Digitalizing the pulse widths causes the same effects to occur as when a DA converter is placed in the control loop.
Drifts and oscillations of the current error and its integral are the result. A remedy for this is the use of a noise shaper, based on positive quantization-error feedback. The best performance is achieved for the highest sampling rate of the PWM, four times per cycle.
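A first-order noise shaper of the kind mentioned here can be sketched in a few lines. This is an illustrative model only: the quantizer step and rounding rule are assumptions, not the thesis implementation. Feeding each sample's quantization error into the next sample pushes the error energy towards high frequencies, where the loop attenuates it.

```python
def noise_shaped_quantize(samples, step):
    """First-order noise shaper: add the previous quantization error to the
    current sample before quantizing (positive error feedback), so that the
    long-term average of the output tracks the input despite coarse steps."""
    out, err = [], 0.0
    for x in samples:
        v = x + err                   # add the fed-back quantization error
        q = round(v / step) * step    # quantize to the available resolution
        err = v - q                   # error carried into the next sample
        out.append(q)
    return out
```

For a constant input the output dithers between adjacent quantizer levels, but its average converges to the input value.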
-44-
Leerstoel Signaalverwerking
-45-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Onderzoekthema: Begeleiding: Afstudeerhoogleraar:
L.J. van Bokhoven Rapportnr.: ESP-02-98 12 februari 1998 Real-time colour image processing on a VLIW image processor Digitale Communicatie. dr.ir. M.J. Bastiaans, dr. G.J. Rozing (Oce) prof.dr.ir. W.M.G. van Bokhoven
Summary: Until now, image processing in a digital copier has been realised with application-specific integrated circuits (ASICs). On account of the ever decreasing time-to-market, it makes sense to explore other means to perform image processing in a copier. Image processors, for example, are fully programmable chips optimised for image processing. At this moment image processors cannot compete with ASICs, because they have insufficient processing power. Investigation shows that in the near future image processors will be powerful enough to compete with ASICs in office copiers. This report discusses the implementation of an image-processing path of a colour copier on an image processor (the 60 MHz Imagine of Arcobel Graphics B.V.). In addition, a new approach is presented to remove the coloured film observed in black text in copies; removing this coloured film improves the quality of a copy significantly. Finally, a method to further accelerate image processing (based on the context of an image) is discussed. The realised image processing sustains 4.4 full-colour A4 pages per minute (4.4 ppm) and reflects the quality of copies made by state-of-the-art colour copiers. It performs: histogramming; appropriate filtering to reduce aliasing and moiré; a colour conversion; unsharp masking; removal of screens with high principal frequencies; separation to printer colours; and halftoning.
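Of the processing steps listed, unsharp masking is the easiest to illustrate: the image is sharpened by adding back the difference between itself and a blurred copy. The sketch below uses a 3x3 box blur as a stand-in for whatever low-pass filter the copier path actually uses; it is not the thesis implementation.

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Sharpen an 8-bit greyscale image by adding amount * (image - blurred)."""
    # 3x3 box blur built from shifted copies; edges handled by replication.
    padded = np.pad(image.astype(float), 1, mode='edge')
    h, w = image.shape
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    return np.clip(image + amount * (image - blurred), 0, 255)
```

A flat region is left untouched (image equals its blur there), while edges gain contrast.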
-46-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Onderzoekthema: Begeleiding: Afstudeerhoogleraar:
M.T.J.J. de Chateau Rapportnr.: ESP-03-98 12 februari 1998 Analyse en implementatie van een akoestisch looptijdmeetsysteem. Digitale Communicatie. dr.ir. A.C. den Brinker, ir. W.H.J. van Schaik (Innovation Handling) prof.dr.ir. W.M.G. van Bokhoven
Summary: For measuring the average temperature over long trajectories, possibly interrupted by small obstacles, conventional methods of temperature measurement fall short. Temperature determination by means of acoustic travel-time measurement offers an excellent alternative. An important field of application is greenhouse horticulture, where climate control plays a large role in (heating) cost savings. The travel time is derived from a comparison between the received signal and the transmitted signal. When an acoustic signal traverses the air path, two effects manifest themselves: the presence of reflections, and frequency-dependent distortion of amplitude and velocity. These effects have been analysed by means of an extensive literature study. With the help of accurate modelling, incorporated in the measurement system, their influence on the determination of the travel time can be compensated. A digital signal processor (DSP) is used for the signal-processing calculations and for the operations required by the various models. A first version of the measurement system has been realised around this building block. The setup is controlled from a PC, on which the developed software is hosted. Finally, a number of measurements were performed to test the algorithms; these showed that the measurements produced by the system exhibit a high degree of reproducibility. In addition, a number of calibration data were determined. To test the algorithms properly, additional measurements are needed, but under well-conditioned circumstances; these were not available at short notice. Those measurements should then also cover the range laid down in the requirements of the system.
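The principle of deriving a mean temperature from an acoustic travel time can be illustrated with the textbook approximation for the speed of sound in dry air, c = 331.3 * sqrt(1 + T/273.15) m/s. This formula is an assumption for the sketch; the thesis uses a more accurate model that also accounts for reflections and dispersion.

```python
import math

def mean_temperature(path_length_m, travel_time_s):
    """Mean air temperature (degrees C) along a path, from the measured
    acoustic travel time, via c = 331.3 * sqrt(1 + T/273.15) m/s."""
    c = path_length_m / travel_time_s      # mean speed of sound over the path
    return 273.15 * ((c / 331.3) ** 2 - 1.0)
```

For example, a 50 m path traversed in about 0.1457 s corresponds to roughly 20 degrees C.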
-47-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
G.R.M. Hamm Rapportnr.: ESP-06-98 27 augustus 1998 Image compression in digital copiers dr.ir. M.J. Bastiaans /dr. G.J. Rozing (Oce) prof.dr.ir. P.P.J. van den Bosch
Summary: In modern digital copiers, large amounts of image data have to be temporarily stored, moved, and processed while using as little system resources as possible. The data consists of two types of image information: binary images and grey-scale images. Data compression provides a way to reduce the amount of data. The price that has to be paid for this reduction is a higher demand on the available processing power in the copier. In this research, the possibilities and implications of image compression in digital copiers are explored. The emphasis lies on an imaginary copier using standard PC hardware. In this report, various existing image compression schemes are described. Both lossless and lossy compression schemes are considered, and specific schemes are explored for both binary and grey-level images. Some compression schemes aimed at binary images are examined more closely: both the time necessary to compress and decompress images and the compression factor have been measured, on various images, using existing PC libraries to perform the compression and decompression. Despite a partially unresolved artefact in the timing measurements, the advantages and disadvantages of specific compression schemes are determined. The results of these measurements are used to perform simulations on a software model of the copier. The goal of these simulations is to examine the effect of image compression on the copier's behaviour. The total time needed for the processing of a job as well as the load on the storage device have been considered. From these simulations, the PackBits compression algorithm was seen to provide the best solution. Furthermore, the PackBits compression scheme has been implemented and added to an existing image processing program. This algorithm codes the image while it is generated by the image processing and stores it in a format compliant with the TIFF (Tagged Image File Format) standard.
This implementation was found to be slightly slower than the library function.
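The PackBits scheme referred to above is a simple byte-oriented run-length code, specified as part of TIFF. The sketch below is an illustrative encoder/decoder pair, not the thesis implementation: a count byte n in 0..127 means "copy the next n+1 literal bytes", and n in 129..255 means "repeat the next byte 257-n times".

```python
def packbits_encode(data):
    """PackBits run-length encoding as used in TIFF."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1                      # length of the run of equal bytes at i
        while i + run < len(data) and run < 128 and data[i + run] == data[i]:
            run += 1
        if run >= 2:
            out += bytes([257 - run, data[i]])
            i += run
        else:
            j = i + 1                # collect literals until the next run
            while (j < len(data) and j - i < 128 and
                   not (j + 1 < len(data) and data[j] == data[j + 1])):
                j += 1
            out += bytes([j - i - 1]) + data[i:j]
            i = j
    return bytes(out)

def packbits_decode(packed):
    out, i = bytearray(), 0
    while i < len(packed):
        n = packed[i]
        if n < 128:                          # literal block of n+1 bytes
            out += packed[i + 1:i + 2 + n]
            i += 2 + n
        elif n > 128:                        # run of 257-n repeated bytes
            out += bytes([packed[i + 1]]) * (257 - n)
            i += 2
        else:                                # 128 is a no-op in PackBits
            i += 1
    return bytes(out)
```

On runs of equal bytes (such as the white background of a binary page) the code compresses well; worst-case incompressible data grows by at most 1 byte per 128.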
-48-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
R. P. deNier 15 oktober 1998 An improved MPEG-2 Advanced Audio Coder. dr.ir. P. Sommen, ir. Dillen (Philips) prof.dr.ir. P.P.J. van den Bosch
Summary: The MPEG-2 Advanced Audio Coding (AAC) standard was completed in April 1997 and is the latest MPEG audio coding standard. The AAC standard is not backwards compatible with earlier MPEG audio standards, which enables it to apply the latest coding techniques. The purpose of the research project is to develop the publicly available AAC coder software in such a way that the audio quality achieved is comparable to that found in state-of-the-art solutions from companies that were more directly involved in the development of the AAC standard. The AAC coding scheme is a transform coder, and primarily uses the characteristics of human hearing to reduce the amount of audio information which needs to be transmitted. The public software, written in the C programming language, is a simple implementation of the main coding tools used in the AAC coding scheme. The coding tools can be categorised into tools dealing with either the perceptually irrelevant audio information, which cannot be heard and thus does not need to be encoded, or redundant audio information, which is removed. The coding tools involved in providing high audio quality were analysed, and the necessary improvements and additions were made to the public software. The perceptual model, used for determining the irrelevant audio information, proved to be the most complex tool to develop. Listening tests were performed to ascertain the performance of the encoder. The listening test for mono audio has shown that a number of improvements will be necessary to obtain a high audio quality. However, it is not known if the state-of-the-art solutions are even capable of achieving the 'indistinguishable' (from the original) audio quality criterion, which is commonly used to grade audio coders. The listening test for stereo audio has shown that the quality obtained compares favourably with that found in the official stereo verification tests of the AAC coding scheme.
There are certain problems which must still be addressed, but an 'indistinguishable' audio quality is nonetheless well within reach. The computational complexity and memory usage of the encoder must still be improved to make the encoder suitable for commercial, real-time applications.
-49-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
C.P. Valens Rapportnr.: ESP-07-98 27 augustus The fast lifting wavelet transform and its application to maritime radar image processing. dr.ir. M.J. Bastiaans / J.C. Huignard (Sodena) prof.dr.ir. P.P.J. van den Bosch
Summary: The project described in this report was initiated by Sodena, a French company that manufactures navigational software. Their main product is a so-called ECDIS (Electronic Chart Display and Information System), used on ships to navigate and manage a voyage; it can be seen as an electronic (sea) chart. An ECDIS can be connected to all kinds of data sources, but the connection to the ship's radar was still very difficult. The objective here is to overlay the radar image on the electronic chart, which would allow for intelligent image processing using the ECDIS database and geographical calculation capabilities. Overlaying radar images has always been a problem because of the amount of data contained in an image. However, using modern signal processors and state-of-the-art mathematics it was thought that this problem could be solved. In this report we study the applicability of wavelet transforms to the processing of maritime radar images. We concentrate on the compression of this kind of images using wavelet theory, but we also investigate the removal of noise from these images with wavelets. The wavelet approach was chosen because of the progress reported in the scientific literature on exactly these two topics; the line of thought was that if wavelet theory can be successfully applied in other fields, then maybe it can be equally successfully applied to our field of application. To carry out this project we first take a look at the maritime radar to see what type of signal we can expect. With this information in mind we build a radar image digitizer and processor so that we can apply the wavelet theory. The study of wavelet theory is the next step in our project. We enter the theory at an abstract level and work our way up to a readily implementable wavelet transform based on the integer lifting scheme.
Once the fast lifting wavelet transform (FLWT) has been implemented, we conduct experiments to find a good configuration for our system. We implement an embedded zerotree wavelet coder to compress radar images. We try several wavelets and we develop a scheme to improve the visual image quality after lossy compression. We compare our results to a JPEG coder and finally we perform some denoising experiments. In the end we find that we can denoise and compress images at the same time using the integer version of the FLWT. If we also use the Cohen-Daubechies-Feauveau 9-7 wavelet, then the results are very good indeed, even for compression ratios up to 128:1. It seems that we can meet our objective.
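The integer lifting scheme mentioned above is easiest to illustrate with the simplest case, the integer Haar transform (the S-transform): a predict step estimates each odd sample from its even neighbour, and an update step corrects the evens. Longer wavelets such as the CDF 9-7 used in the thesis follow the same predict/update pattern with more lifting steps; this sketch is illustrative, not the thesis code.

```python
def lifting_haar_forward(x):
    """One level of the integer Haar wavelet via lifting. Integer arithmetic
    throughout (floor shifts), so the transform is exactly invertible."""
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]             # predict step
    approx = [e + (d >> 1) for e, d in zip(even, detail)]   # update step
    return approx, detail

def lifting_haar_inverse(approx, detail):
    """Undo the update and predict steps in reverse order, then interleave."""
    even = [a - (d >> 1) for a, d in zip(approx, detail)]
    odd = [d + e for e, d in zip(even, detail)]
    return [v for pair in zip(even, odd) for v in pair]
```

Because every step is reversible in integer arithmetic, the same machinery supports both lossless coding and, with coefficient thresholding, lossy compression and denoising.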
-50-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
D.W. van Vugt 11 juni 1998 Multichannel Blind Adaptive Source Separation in the Frequency Domain. dr.ir. P. Sommen, ir. D. Schobben prof.dr.ir. P.P.J. van den Bosch
Summary: In this graduation report a frequency-domain blind source separation algorithm is presented. The sources, e.g. two persons talking in the same room at the same time, are recorded by sensors, e.g. microphones. These observed signals are convolutive mixtures of the original sources, due to delays, echoes and filtering by the room. The observed mixtures are transformed to the frequency domain in order to reduce the convolutive mixing problem to an instantaneous mixing problem, in such a way that separation is performed for every frequency bin. This frequency transformation is implemented using the overlap-save method. An algorithm developed for instantaneous mixtures is then modified for use in the frequency domain. The frequency-domain algorithm is capable of separating convolutive mixtures. To improve the frequency spectrum of the outputs, a normalisation is introduced. This normalisation is placed outside the update cycle of the algorithm and is used only for optimising the frequency spectrum of the outputs. The update is a learning rule based on an information-theoretic approach, which achieves separation by maximising the entropy of the outputs, and thereby minimising the mutual information between the outputs. Other approaches to blind source separation are briefly discussed. The algorithm is suitable for separation of multiple sources; although in this report only simulations with two sources have been done, the algorithm is not restricted to two sources. The developed algorithm is such that it can be implemented in real time.
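The entropy-maximising learning rule described here is, per frequency bin, of the Bell-Sejnowski Infomax type. Below is a sketch of one natural-gradient update for an instantaneous (real-valued) mixture; the tanh score function, step size and block update are illustrative assumptions, and the thesis applies this idea per frequency bin to complex spectra.

```python
import numpy as np

def infomax_step(W, x_block, mu=0.01):
    """One natural-gradient Infomax update for instantaneous source separation.
    W: (n, n) unmixing matrix; x_block: (n, T) block of mixed samples.
    The update drives E[g(y) y^T] towards the identity, which for independent
    outputs zeroes the off-diagonal (cross-talk) terms."""
    y = W @ x_block
    g = np.tanh(y)                 # score function for super-Gaussian sources
    T = x_block.shape[1]
    dW = (np.eye(W.shape[0]) - (g @ y.T) / T) @ W
    return W + mu * dW
```

When the inputs are already independent, the expected off-diagonal update is zero, so an unmixing solution is a fixed point (up to scaling) of the rule.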
-51-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
G. Yntema ESP-01-98 12 februari 1998 Study of development platform for a multiprocessor DSP system. dr.ir. P. Sommen, ir. James prof.dr.ir. A.J.A. Vandenput
Summary: At the Philips Centre for Manufacturing Technology, section Professional Electronics (CFT/PE), a multiprocessor board has been developed for real-time MPEG2 audio processing. Software development for this hardware, however, has proven to be very difficult. Software developers have to take care of all the different aspects of developing software for a multiprocessor platform, ranging from the allocation of tasks onto processors and adding communication between the tasks to the making of tools that assist the developer with estimating processor loads. This report investigates methods to make software development for multiprocessor hardware platforms more abstract. Two development platforms for multiprocessor DSP systems have been studied: Virtuoso Classico and Grape-II. The Virtuoso Classico development platform consists of a distributed real-time operating system and uses a virtual single processor principle: the source code is written as if it were for a single processor, while Virtuoso Classico generates the code for the individual processors. Grape-II is an interactive tool that takes a graphical representation of the data flow model of an application as input, together with a description of the hardware platform. Grape-II accounts for the allocation of tasks and the scheduling thereof, and adds the necessary communication principles between them. Because Grape-II generates the scheduling scheme at compile time, it does not need a real-time operating system, hence avoiding the overhead involved with a kernel. Grape-II proved to be a tool for realising mainly synchronous data processing applications containing little or no non-deterministic parts. Virtuoso Classico, on the other hand, is well suited for a broad range of (a)synchronous applications and data-dependent processing, due to its kernel-based architecture. Both development platforms are described in detail.
This master's thesis project has resulted in a version of Virtuoso Classico that is ported to the SHARC board. Tests have been performed to determine the overhead introduced by Virtuoso Classico. This overhead consists of the following: memory usage as a result of the kernel size; processor-time usage of task switches and other run-time overhead; and communication overhead. All overheads have been determined in order to be able to predict the overhead when developing applications using Virtuoso Classico. Conclusions and recommendations have been formulated for Philips CFT/PE about the use of both development platforms. However, additional tests for studying the multiprocessor capabilities of Virtuoso Classico still need to be performed.
-52-
Leerstoel Medische Elektrotechniek
-53-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
S. van Geloven 11 juni 1998 Korte termijn belastingsvoorspelling met neurale netwerken. dr.ir. P.J.M. Cluitmans prof.dr.ir. P.P.J. van den Bosch
Summary: This thesis deals with a project on short-term load forecasting with neural networks. It concerns the forecasting of the electrical load several days in advance, on the basis of historical load, weather and calendar variables. The research is focused on three main issues: the neural network's learning algorithms, its topology, and the selection of input variables to base the forecasts on. The project is divided into three parts, and the results of each part are discussed per chapter. Part I is on short-term load forecasting with neural networks. It starts with chapter 3, which describes neural networks that could be used for short-term load forecasting. Chapter 4 then covers short-term load forecasting with neural networks; according to the literature, performances between 2% and 4% MAPE (mean absolute percentage error) can be achieved. This part also contains chapter 5, which discusses a short pilot project at NUON, a Dutch electricity utility, where a lot of experience regarding short-term load forecasting was gained. The forecaster in use at NUON has a performance of 5% MAPE. Furthermore, a large amount of load, weather and calendar data from this pilot project was made available for this thesis. Part II is on data analysis and simulator development. Chapter 6 discusses the analysis of the data made available by NUON. The outcome is that the correlation coefficient between certain signals and the load to be forecasted gives a good impression of which signals can be useful as initial input variable selections for the neural networks. Chapter 7 describes the functional design of the neural network simulator used to train the neural networks. The development of this simulator is based on an existing neural network library, which unfortunately led to all kinds of restrictions and excessively long simulation times.
Part III is on the development, optimisation and validation of the forecaster. Chapter 8 discusses the results for the different neural networks that were trained. For the performances, see the last paragraph of this summary; other major results are stated below. It appeared that random initialisation of several processes in the learning algorithm has a large impact on the final performance, sometimes ±1.25% MAPE. It also appeared that increasing the number of hidden neurons and the number of hidden layers did not lead to higher performances. The input variable selection for good neural-network short-term load forecasters should at least contain the following: cyclic time intervals, global weather information and the load of 168 hours (one week) ago. The indication of special days and holidays is also very important, but the neural networks in this thesis could not cope well with this information, resulting in very low performances for special days and holidays. Chapter 9 describes an unsuccessful attempt to optimise the best forecasters of the previous chapter by means of a genetic algorithm. Chapter 10 covers actual forecasts over a period of a week at NUON. The thesis ends with chapter 11, in which conclusions and recommendations are given. It can be concluded that short-term load forecasting is a very complex problem and very time-consuming to tackle with neural networks. Nevertheless, several forecasters have been trained with random search and error back-propagation, resulting in a performance of 7% MAPE; this was eventually optimised by a combination of these algorithms to 6% MAPE. A recommendation for future research is that a hybrid system of a neural network and a statistical mechanism should be more suitable for short-term load forecasting, especially regarding special days and holidays.
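The MAPE figure used throughout this summary is simply the mean absolute percentage error between actual and forecasted loads; a minimal definition for reference:

```python
def mape(actual, forecast):
    """Mean absolute percentage error in percent; assumes no zero actuals."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)
```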
-54-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M.F.A. Manders 23 april 1998 Digitale filtering in de bepaling van cardiovasculaire variabiliteit bij neonaten. prof.dr.ir. P. Wijn, dr.ir. P.J.M. Cluitmans, drs. F. Bastin prof.dr.ir. P.P.J. van den Bosch
Summary: The neonatal Intensive Care Unit of the Sint Joseph Hospital in Veldhoven is a specialised ward where seriously ill and mostly prematurely born newborns are stabilised and receive the necessary intensive care. Important physiological signals of these newborns are measured continuously, among them the electrocardiogram, the arterial blood pressure and the respiration signal. These signals primarily serve for monitoring, but can secondarily also be used for research without imposing an extra burden on these small patients. To promote this research, the PINO project (Physiologisch Informatievoorzieningssysteem voor Neonataal Onderzoek: a physiological information system for neonatal research) was started. One part of this project is the research into cardiovascular variability. If the heart rate and the systolic and diastolic blood pressure are observed over time, these quantities show clear variations. These variations are called cardiovascular variability and are to a large extent caused by the many control mechanisms in the body. In particular, the sympathetic and parasympathetic nervous systems have a clear influence on these variations. The control exerted by these two nervous systems manifests itself in two different frequency bands, which makes it possible to study them. The powers in these frequency bands are taken as a measure of the activity of the corresponding type of nervous system. Particularly important is the ratio between the variability in the two frequency bands, since this is a measure of the balance between sympathetic and parasympathetic action. Especially in premature infants this gives important information about the development of the nervous system. Essential, therefore, is a correct power estimation in both frequency bands, which, given the very short available data sets, involves a number of complications.
In the past, a frequency-domain approach was chosen, in which a number of problems occur, such as spectral leakage, aliasing and, in particular, a very inaccurate estimation of the low-frequency variability because of the short data sets. An alternative approach to this power estimation is a time-domain method; the use of digital filters to still be able to consider the different frequency bands is then unavoidable. The goal of the research is therefore to investigate whether this latter approach can lead to better results than the former. To that end, both the time-domain and the frequency-domain method are extensively analysed and compared. The fact that the data sets to be analysed are not equidistant turns out to make equidistant resampling necessary, which causes distortions in the spectrum of these signals. Corrections for this prove to be possible in the frequency domain, but only coarsely in the time domain. The desired filter characteristics can be realised well in the time domain, albeit only with recursive filters, with a non-linear phase characteristic as a consequence, which has no influence on the power estimation. Despite the poorer corrections for the effects caused by resampling, the time-domain method turns out to yield significantly better results for the low-frequency variability, and comparable results for the relatively high-frequency variability. The former is mainly due to the fact that splitting the data sets into segments is not necessary in the time domain to increase reliability, in contrast to the frequency-domain method, where the resulting reduced spectral resolution is responsible for the poorer results.
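The underlying idea, isolating a frequency band by filtering and then measuring the power of the filtered signal, can be sketched as follows. An ideal FFT-brickwall filter stands in here for the recursive filters the thesis actually uses; the function name and band edges are illustrative.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of x in the band [f_lo, f_hi] Hz: zero out FFT bins outside the
    band, transform back, and take the mean square of the filtered signal."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.mean(np.abs(np.fft.irfft(X, len(x))) ** 2)
```

The ratio of two such band powers (low-frequency over high-frequency) would then serve as the sympathetic/parasympathetic balance measure described above.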
-55-
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
B.J. Pellis 10 december 1998 Modellering van de baroreflex. prof. Wesseling prof.dr.ir. P.P.J. van den Bosch
Summary: In this thesis we developed a model of the human circulation and the short-term blood pressure control system. In the circulation model we approached all major segments as elastic tubes filled with fluid, which corresponds with the physiology. For most parts of the circulation, which operate at relatively low pressures, a linear elastance seemed to be suitable. However, for the aorta we used a nonlinear model and for the ventricles a time-varying elastance model. In the control model we focused on the baroreflex. Baroreceptors are modelled with a nonlinear transfer and first-order dynamics. The effectors of the system are vagal and sympathetic heart rate control, peripheral resistance, venous unstretched volume and contractility control. All these effectors are implemented as a first-order low-pass filter with a time delay, time constant, gain and offset. To test the model behaviour we performed several experiments. We looked at signals during free-run experiments, and we also looked at open- and closed-loop gains of the system. The outcomes of these experiments show a strong similarity with previous studies and with measurements done in humans. Furthermore, we looked at blood pressure and heart rate variability by introducing noise sources in our model. Looking at the calculated spectra we concluded that they mimic the measured spectra, though no detailed study was done.
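The effector structure described here (a pure time delay followed by a first-order low-pass filter with gain and offset) can be sketched in discrete time. The forward-Euler discretisation, parameter names and initial conditions below are illustrative assumptions, not the thesis model.

```python
def effector_response(u, dt, delay, tau, gain, offset):
    """Baroreflex effector sketch: delayed input, then a first-order low-pass
    filter with time constant tau; steady state is gain * u + offset."""
    d = int(round(delay / dt))            # delay expressed in samples
    y, out = offset, []
    for k in range(len(u)):
        uk = u[k - d] if k >= d else u[0]  # delayed input (held before onset)
        y += (dt / tau) * (gain * uk + offset - y)  # forward-Euler low-pass
        out.append(y)
    return out
```

For a step input the output rises towards gain * u + offset with time constant tau, starting only after the delay has elapsed.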
Candidate: M.H.A. van Steen
Graduation date: 23 April 1998
Graduation project: Validation of muscle relaxation measurements.
Supervision: dr.ir. P. Cluitmans, dr. Blom
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: The administration of neuromuscular blocking agents during surgery aims to suppress involuntary muscle movements in anaesthetised patients. Muscle relaxants are conventionally administered by bolus injections, which fails to maintain steady relaxation levels; continuous infusion leads to a more stable level of muscle relaxation. The work in this paper is aimed at optimization of the existing measurement system and at validation of muscle relaxation measurements, in order to develop, at a later stage, a closed-loop feedback controller for muscle relaxation. An improved version of the data acquisition part of the measurement system was developed: a new digital-to-analog conversion board was adapted, an interface to an integrated anaesthesia monitor was established, and software was developed to collect, present and store the muscle relaxation measurements. The measurement method used in this work is the train-of-four (TOF) method with EMG sensors. The purpose of the validation algorithm is to detect measurements that are disturbed by artefacts: if the quality of a measurement is in doubt, the algorithm should consider it invalid. The final goal is first to discard all measurements that contain artefacts, and second to avoid discarding valid measurements. Since very little is known about the 'correct' shape of the signals, knowledge was acquired by analyzing many parameters of the EMG signals. The 'heuristic' approach to validation used in this work may be summarized as follows:
1. A learning set and a test set of measurements were inspected by eye. In this way a 'golden standard' was determined for the validation algorithm, and insight into signal properties and artefacts was gained.
2. A large number of parameters was chosen, based on a single ECAP, on the rate of change between the ECAPs of one TOF, or on the rate of change between TOFs.
3. The parameters were calculated for every measurement in the learning set and the results were presented in histograms.
4. Suitable bounds for the parameters were determined.
5. The criteria were applied to the learning set and the results were compared to the visual inspection.
6. The algorithm was verified with a test set of measurements independent of the learning set.
7. If necessary, the algorithm should be optimized by repeating steps 2 through 6, using different test sets each time, until the results are satisfactory.
Steps 1 through 6 were carried out. Without the optimization step, the algorithm was able to detect about 85% of all artefacts. A large number of measurements was incorrectly considered invalid; this number was just within the limits posed by the controller's needs in the steady-state phase, and below the demands during the onset phase. Ways to optimize the algorithm are re-evaluation of the visual inspection, finding parameters that are still more independent of the level of muscle relaxation, and tuning of the threshold values.
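The threshold-based validation described in the steps above can be sketched as follows; the parameter names and bounds are purely illustrative placeholders, not the parameters and limits derived in the report:

```python
def validate_tof(amplitudes, bounds):
    """Flag a train-of-four (TOF) measurement as invalid when any derived
    parameter falls outside its learned bounds. `amplitudes` are the four
    ECAP amplitudes of one TOF; parameter names and bounds are hypothetical."""
    if len(amplitudes) != 4 or min(amplitudes) < 0:
        return False
    params = {
        # fade ratio of fourth to first response (illustrative parameter)
        "t4_t1_ratio": amplitudes[3] / amplitudes[0] if amplitudes[0] else 0.0,
        # largest jump between consecutive responses (illustrative parameter)
        "max_step": max(abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])),
    }
    return all(lo <= params[name] <= hi for name, (lo, hi) in bounds.items())
```

In the report's workflow, the bounds dictionary would be filled from the histograms of the learning set (step 4) and then applied to new measurements.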
Candidate: J. van Waterschoot
Graduation date: 11 June 1998
Graduation project: Niles, a speech controlled home command centre
Supervision: ir. W.H. Leliveld, ing. H.J.M. Ossevoort
Graduation professor: prof.dr.ir. P.P.J. van den Bosch
Summary: In this report a design is presented for a stand-alone, speech-driven remote control centre for the Bush-Timac X-10 system. This control centre, referred to as "Niles", can control five functions for sixteen devices, completely speech driven. It can be operated from 0.2 to 8 m and reaches a high (>97%) recognition rate with an experienced speaker (tested in a real-life environment). The X-10 devices are controlled by transmitting an infrared code to a receiver that modulates these codes onto the mains. Several analogue signal processing techniques are applied to reduce the effect of acoustic phenomena when operating Niles from larger distances: Automatic Gain Control to adjust the input sensitivity and spectral shaping to correct for the proximity effect. Furthermore, some directives on the placement of Niles are given. In the conclusions and recommendations chapter some ideas are presented to optimise the system and add extra features.
Leerstoel Elektromechanica & Vermogenselektronica
Candidate: R.B. Dill
Graduation date: 23 April 1998
Graduation project: Het koppel van magnetische micromotoren met koperen en ijzeren statorlichamen.
Supervision: ir. P.A.F.M. Goemans, dr.ir. F.N. van de Vosse
Graduation professor: prof.dr.ir. E.M.H. Kamerbeek
Summary:
A micromotor with a permanent-magnet rotor and a copper stator, 0.7 mm in diameter and 2.3 mm long, delivers a torque of the order of 10^-7 Nm. Surrounding the copper stator with an iron cylinder provides magnetic shielding outside the motor and locally strengthens the field inside it, so that the torque increases. A fully iron stator is also possible. The graduation project aims to determine the torque of the micromotor with iron stator by calculation and simulation, also considering two types of micromotor with copper stator. In addition, a fluid brake is investigated with which the torque of the micromotor can be measured accurately. A fluid brake consists of a cylinder rotating in a viscous fluid. Measurements on a relatively large fluid brake, using a torque-measuring instrument from the watch industry, show that the torque of a fluid brake can be predicted by simulation to within 3%. By relating the measurement results to the simulation results, a torque formula was derived for a micro fluid brake that gives the torque (0.01 to 1 µNm) as a function of the rotation frequency (0 to 100 Hz) and the viscosity of the fluid. With this formula the torque of a micromotor can be determined with an inaccuracy of less than 5%. The torque of the motors with copper stator is determined analytically with a 2D model. The micromotor with iron stator cannot be described by a 2D model, because the magnetic flux in the stator also runs in the axial direction; a 3D finite-element simulation is therefore used. The torque is determined from the Maxwell stress on a closed surface around the rotor. This surface lies in a strongly inhomogeneous field, so a very accurate 3D calculation is practically impossible. The torque can also be calculated by integrating the torque on each magnetic dipole (µ0 m × H) in the permanent magnet over the volume of the magnet. Comparing the results of both methods under various conditions shows that the torques calculated in these ways can differ by some 10%. Simulation shows that the torque is reasonably preserved with saturated iron; the torque calculations are, however, too inaccurate to determine the influence of saturation reliably. It can be stated that the torque increases both with an iron stator and with an iron cylinder around the copper stator. For sufficient magnetic shielding and equal torque at equal current, the iron stator needs a larger diameter than an iron cylinder surrounding the copper stator. When no strict shielding requirements are imposed and the three motor types are compared at equal outer diameter and stator current, a motor with iron stator yields a torque increase of about 50% relative to the motor with all-copper stator, while the motor with iron cylinder doubles the torque.
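The fluid-brake principle can be illustrated with the textbook laminar Couette-flow torque for a cylinder rotating coaxially inside a fluid-filled housing; note that this is the generic relation, not the fitted formula of the report, which was derived from measurements:

```python
import math

def couette_torque(mu, omega, length, r_inner, r_outer):
    """Viscous torque on an inner cylinder of radius r_inner rotating at
    omega rad/s inside a coaxial housing of radius r_outer, gap filled with
    fluid of dynamic viscosity mu (laminar Couette flow, end effects ignored):
        T = 4*pi*mu*omega*L*r1^2*r2^2 / (r2^2 - r1^2)
    """
    return (4.0 * math.pi * mu * omega * length * r_inner ** 2 * r_outer ** 2
            / (r_outer ** 2 - r_inner ** 2))
```

As in the report's fitted formula, the torque is proportional to both viscosity and rotation frequency, which is what makes a calibrated fluid brake usable as a precision torque reference.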
Candidate: C.J.A. Schetters
Report no.: EMV 98-09
Graduation date: 15 October 1998
Graduation project: A Miniaturized 5.2 Watt Battery Charger.
Supervision: ir. P.J.M. Smidt (Philips EPM)
Graduation professor: prof.ir. J. Rozenboom
Summary: In this thesis a complete theoretical analysis and practical design of a miniaturized 20 cc, 5.2 watt battery charger is given. A standard flyback topology is chosen, theoretically analyzed and implemented. The need to reduce the number of components resulted in an integrated flyback solution, the VIPer20 from SGS-Thomson. The electrical losses in the converter are identified, calculated and successfully verified by measurement. The feedback loop is theoretically analyzed using the state-space averaging method, which yields a dynamic small-signal (AC) and steady-state (DC) model of the converter operating in current and voltage regulation mode. From these models the Bode plots are calculated and successfully verified by a closed-loop gain measurement. The electrical volume of the prototype battery charger is 25 cc due to the chosen transformer; with flying leads 20 cc is possible.
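The steady-state operating point underlying such a flyback analysis can be illustrated with the standard continuous-conduction transfer relation; the voltages, duty cycle and turns ratio below are illustrative numbers, not the charger's actual design values:

```python
def flyback_vout(vin, d, n_ratio):
    """Ideal steady-state output of a flyback converter in continuous
    conduction mode: Vout = Vin * (Ns/Np) * D / (1 - D)."""
    return vin * n_ratio * d / (1.0 - d)

def flyback_duty(vin, vout, n_ratio):
    """Duty cycle needed for a target output voltage (inverse of the above):
    D = Vout / (Vout + Vin * Ns/Np)."""
    return vout / (vout + vin * n_ratio)
```

The state-space averaging mentioned in the abstract linearises the converter around exactly this operating point to obtain the small-signal (AC) model used for the Bode plots.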
Candidate: R.H. van der Voort
Report no.: EMV 98-03
Graduation date: 23 April 1998
Graduation project: Investigation of simple converters for compact fluorescent lamps.
Supervision: dr. J.L. Duarte
Graduation professor: prof.ir. J. Rozenboom
Summary: Gas-discharge lamps inherently have a so-called negative current-voltage characteristic, which means the lamp cannot be connected directly to a voltage source: a series impedance must limit the lamp current. Nowadays converters with high switching frequencies are used for this, and almost all of them employ a ballast inductor to limit the lamp current. There is a constant demand for new, small, simple and cheap converters; from this point of view a converter without ballast inductor is desirable. Such a circuit has been designed and analysed. The circuit consists of a DC voltage source, a switch and the lamp. Mathematical analysis and simulations show that when the lamp is operated in an open-loop system, with the switch driven at a fixed frequency and duty cycle, no stable lamp operation is possible. If the lamp is operated in a closed-loop system, with a controller added that opens and closes the switch depending on the lamp current, stable lamp operation is possible. The controller concept is based on limiting the instantaneous maximum lamp current to a preset value. A practical circuit was realised with which various fluorescent lamps burn stably. Measurements were also performed with the circuit and lamp properties were charted. The influence of lamp parameters such as fill pressure, fill gas, tube length and tube diameter, and of circuit parameters such as maximum lamp current, DC source voltage and switching frequency, on the electrical behaviour of the lamp was examined. It was found that the lamp power, and with it the light output, can be increased if the noble gas contains a higher percentage of neon, the tube length increases, the tube diameter decreases, the maximum lamp current increases, the DC source voltage decreases and the switching frequency increases. The influence of variations in fill pressure is small. Furthermore, the luminous efficacy (lumen per watt) can be increased if the maximum lamp current decreases, the DC source voltage increases and the switching frequency increases. Typical values are a light output of 900 lumen and a luminous efficacy of about 60 lumen per watt. The influence of the lamp parameters on the efficacy was not investigated.
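The controller concept, limiting the instantaneous lamp current to a preset maximum, can be sketched as a hysteretic switch decision; the re-arm threshold of half the maximum is a hypothetical choice for illustration, not a value from the report:

```python
def switch_state(i_lamp, i_max, on):
    """Peak-current limiter: open the switch when the lamp current reaches
    the preset maximum i_max, close it again once the current has decayed
    below a (hypothetical) re-arm fraction of that maximum."""
    if on and i_lamp >= i_max:
        return False          # current hit the limit: open the switch
    if not on and i_lamp < 0.5 * i_max:
        return True           # current decayed enough: close the switch
    return on                 # otherwise keep the present state
```

Called once per control cycle with the measured lamp current, this closes the loop that the abstract identifies as necessary for stable lamp operation.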
Candidate: J.W.H. Kuppen
Graduation date: 23 April 1998
Graduation project: Een Unified Power Flow Controller als blindstroomcompensator.
Supervision: dr. J.L. Duarte
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary: A Unified Power Flow Controller (UPFC) can be used for instantaneous control and dynamic compensation of an AC transmission system. A UPFC consists of two Voltage Source Inverters (VSI) fed from a common DC source: one VSI is connected in series with the grid, while the other is connected in parallel. Here only the parallel VSI, coupled to the grid through inductors, is considered. This VSI is used to inject line currents into the grid to cancel the higher harmonic currents caused by nonlinear loads; the nonlinear load used is a 20 kVA three-phase thyristor bridge rectifier. The currents injected into the grid by the VSI ensure that the grid current consists only of a fundamental component in phase with the phase voltage; the VSI therefore also generates the instantaneous reactive power. The injected line currents are controlled by controlling the voltage across the inductors connecting the grid and the VSI, so any desired voltage must be producible at the output terminals of the VSI. To achieve this, two Pulse Width Modulation (PWM) methods are treated: Space Vector Modulation (SVM) and Symmetrical Dead-Band Modulation (SDBM), both based on a space vector. The reference signals for the line-current control are computed with the instantaneous active and reactive power theory, using a rotating reference frame. A control algorithm is used to track the reference current vector and to transform the line-current setpoints into a desired voltage vector for the VSI. Simulations and experiments were performed to validate the theoretical analyses. Harmonic currents up to 1000 Hz could be suppressed successfully. SDBM furthermore proved to be a good modulation technique when the modulation index operates just below its maximum value. The reference current vector is determined well with the instantaneous power theory using a rotating reference frame.
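The reference-current computation from instantaneous power theory can be sketched in stationary (alpha-beta) coordinates; the report works in a rotating frame, so this is a simplified illustration using one common sign convention for the imaginary power q:

```python
import math

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities."""
    k = math.sqrt(2.0 / 3.0)
    alpha = k * (a - 0.5 * b - 0.5 * c)
    beta = k * (math.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

def pq(v_abc, i_abc):
    """Instantaneous real and imaginary power (p-q theory); the sign of q
    here is one common convention, not necessarily the report's."""
    va, vb = clarke(*v_abc)
    ia, ib = clarke(*i_abc)
    return va * ia + vb * ib, va * ib - vb * ia

def compensation_currents(v_abc, p_ref, q_ref):
    """Alpha-beta current references that realise (p_ref, q_ref); the shunt
    VSI injects the difference so the grid carries only the fundamental."""
    va, vb = clarke(*v_abc)
    d = va * va + vb * vb
    return (va * p_ref - vb * q_ref) / d, (vb * p_ref + va * q_ref) / d
```

For a balanced sinusoidal system with in-phase currents, p is constant and q is zero; harmonic and reactive load components show up as oscillating or nonzero terms, which the controller turns into injection references.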
Candidate: E.A.J. van de Poel
Report no.: EMV 98-07
Graduation date: 11 June 1998
Graduation project: DSP gestuurde 16 kW power module voor een servobesturing
Supervision: dr. J.L. Duarte, ir. C.G.E. Wijnands (Prodrive B.V., Son)
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary: Recent developments in power supplies and electrical drives demand fast inverters with computationally powerful digital controllers. Such inverters are needed, among other things, for the flux-oriented rectifier that, in a back-to-back configuration with an AC motor drive, can return braking energy to the public three-phase grid, the drawn or delivered grid currents being sinusoidal. An industrial prototype of the flux-oriented rectifier was recently realised. To enable further development of the back-to-back configuration, a compact fast inverter module is being developed that uses a digital signal processor and is suitable both for implementing a flux-oriented rectifier and for an AC motor drive. As a first test of the new inverter module, a highly dynamic AC motor drive for a servo is to be realised. First the full theory must be mastered to make a correct drive possible. The theory used is closely related to that of the flux-oriented rectifier, because the servo is a three-phase permanent-magnet electronically commutated machine with sinusoidally distributed windings, so the phase currents must be sinusoidal. The key issues for a highly dynamic AC motor drive are the way the phase shift in the current is corrected, the quality of the current-vector controller, and the ability to apply "field control": regulating the speed with the reactive current once the applied voltage can no longer be increased. The actual control towards a position or speed setpoint is done with a cascade controller: a fast current loop controls the machine torque, a slower loop controls the angular velocity, and a still slower loop controls the rotor angle. The developed inverter module makes it possible to implement the specific drive technique and the control loops entirely in software. In practice the required mathematical formulas from the applied theory proved hard to implement because of various limitations of the digital signal processor on the inverter module; scaling and number resolution must be taken into account. Thanks to the many control facilities of the DSP, however, the complete drive could be realised relatively quickly and fully. The quality of the AC motor drive is determined mainly by the quality of the current loop. The intricate current loop was further analysed and optimised, with compensation of the varying open-loop gain and the switching in and out of "field control" explained in more detail. Besides these factors, correct four-quadrant operation of the current control is important in order to later return braking energy to the three-phase grid with the back-to-back configuration; four-quadrant control is also necessary for good responses of the servo control without flux-oriented rectifier. Through various measurements of step responses and responses at demanding reference speeds, optimisation of the software yielded a highly dynamic servo system that can also return energy in a controlled way during braking. A compromise was made between available computation time, sinusoidality of the phase currents and control speed. Good performance is achieved for specific operating conditions; some aspects of the hardware and of the implemented control must be adapted for other operating conditions to improve the performance further.
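The cascade structure described above, with a slow position loop, a faster speed loop and a fast current loop, can be sketched as nested controllers. A real implementation would use PI controllers with limits and feedforward; the pure proportional gains here are only a structural illustration:

```python
def cascade_step(theta_ref, theta, omega, i_meas, gains):
    """One update of a cascade servo controller: the slow position loop
    produces the speed reference, the speed loop the torque (current)
    reference, and the fast current loop the voltage command.
    `gains` = (kp_pos, kp_vel, kp_cur), all illustrative values."""
    kp_pos, kp_vel, kp_cur = gains
    omega_ref = kp_pos * (theta_ref - theta)   # position loop (slowest)
    i_ref = kp_vel * (omega_ref - omega)       # speed loop
    v_cmd = kp_cur * (i_ref - i_meas)          # current loop (fastest)
    return omega_ref, i_ref, v_cmd
```

In the drive of the abstract, the inner loop would run at the inverter's control rate while the outer loops run at successively lower rates, which is what makes the cascade tractable on a DSP with limited computation time.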
Candidate: P.M. Risseeuw
Report no.: EMV 98-08
Graduation date: 15 October 1998
Graduation project: Ontwerp van een hysteresis regelaar.
Supervision: dr.ing. F. Blaschke
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary: In industrial electrical drive systems three-phase asynchronous machines are often used because they are cheap and robust. To save power in certain applications it is desirable to change the speed, which is controlled by the frequency of the applied magnetic fields. Consequently, the speed can be changed with an inverter that generates a three-phase supply current with variable frequency; the operation of the inverter is based on a kind of pulse width modulation. For high-speed applications (10% to 100% of rated machine speed), commercial inverters are available that operate without rotor position encoders; these are suitable for driving pumps and fans. To operate accurately at lower speed, these inverters need rotor position encoders, whose disadvantage is that they are quite expensive and make the drive system less robust. At Eindhoven University of Technology, research has been done on sensorless direct field orientation at zero flux frequency, i.e. controllers that use only the measured three stator currents and voltages as feedback. For this specific application a hysteresis-controlled inverter with a stable output current and low harmonic distortion is necessary. For this reason a hysteresis current controller that does not use the zero-voltage vector has been developed, and a special switching algorithm has been derived for the purpose. The algorithm uses the six possible voltage vectors equally and always keeps the switching sequence the same. A big advantage is that the output current shows very stable, predictable and regular behaviour at low speed, resulting in low harmonic distortion there; at higher speed the hysteresis current controller still shows stable operation with low harmonic distortion. Estimation of machine parameters is not required. Zero-voltage vectors are often used to reduce the inverter switching frequency at low speed, because they give a lower current slope and hence a longer switching period; avoiding them therefore increases the switching frequency at low speeds. To reduce the switching frequency to an acceptable value, several measures can be taken: reducing the DC-link voltage of the inverter at low speed, increasing the hysteresis band width, or connecting inductors between the inverter and the machine. The first solution is quite expensive, as a second converter is needed to obtain a variable DC-link voltage. The second can easily be implemented, but gives a large ripple current at low speed; a larger ripple current does not, however, mean larger harmonic distortion in the frequency band of interest, but does give higher machine losses. A combination of a variable DC-link voltage and a variable hysteresis band width gives good results in keeping the switching frequency and ripple current limited. The effect of connecting inductors between inverter and machine has not been investigated.
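The trade-off between hysteresis band width, DC-link voltage and switching frequency can be illustrated with the idealised two-level, single-phase relation below; the six-vector three-phase scheme of the report behaves differently in detail, and the numbers are illustrative:

```python
def hysteresis_switching_freq(v_dc, emf, inductance, band):
    """Switching frequency of an idealised two-level, single-phase hysteresis
    current controller: the current slews at (v_dc -/+ emf)/L across a band
    of total width 2*band, giving
        f = (v_dc**2 - emf**2) / (4 * band * L * v_dc)."""
    return (v_dc ** 2 - emf ** 2) / (4.0 * band * inductance * v_dc)
```

The formula shows the mechanisms named in the abstract: halving the DC-link voltage or doubling the hysteresis band (or the series inductance) lowers the switching frequency, and the frequency is highest at low back-EMF, i.e. at low speed.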
Candidate: B. Williams
Report no.: EMV 98-05
Graduation date: 23 April 1998
Graduation project: DSP gestuurde convertor voor een permanent-magneet motor/generator.
Supervision: dr.ir. L.J.J. Offringa, dr. J.L. Duarte
Graduation professor: prof.dr.ir. A.J.A. Vandenput
Summary: Within the Electromechanics and Power Electronics group of TU Eindhoven, a high-speed permanent-magnet generator has been developed for direct coupling to a gas turbine. The idea arose to use this generator in an electricity-generating unit based on a two-stroke diesel engine. An ordinary synchronous generator (the main generator) is coupled to the two-stroke diesel, while the permanent-magnet machine is directly coupled to a compressor/turbine unit placed upstream of the diesel engine. During system start-up the machine must drive this unit; once the system is up to speed, the machine delivers energy to the grid. This is simple to realise with a current-source converter, which couples the machine to the 50 Hz grid through a current DC link. The grid pollution caused by the block-shaped currents is not a limitation here, because the converter is in parallel with the main generator, whose power is about 20 times larger than that of the permanent-magnet machine. The converter is built with GTOs and is driven optically by a digital system built around a Digital Signal Processor (a Texas Instruments TMS320C30). The phase shift between back-EMF and phase current of the machine can be set; with this phase shift and the DC-link voltage, any operating condition of the machine can be realised. In motor operation this also allows field weakening of the permanent-magnet machine. To realise the phase shift, the rotor position is read in with a position sensor on the machine shaft. The timing of the GTO firing instants is calculated by the DSP but executed by a logic circuit, minimising unwanted variations in the firing instants. The commutation of the current in the converter was simulated on a PC, with particular attention to the influence of the snubbers of the GTOs that do not take part in the commutation process. A commutation-assist circuit, consisting of a diode rectifier and a voltage source in parallel with the machine, makes starting at high current possible. Measurements were performed on the developed digital system and on the converter without commutation-assist circuit. The system operates reliably and the phase shift between back-EMF and phase current of the machine can be set accurately.
CAPACITEITSGROEP INFORMATIE & COMMUNICATIESYSTEMEN
Leerstoel Digitale Informatiesystemen
Candidate: ing. G.J.L. Netten
Report no.: ICS-EB 682
Graduation date: 12 February 1998
Graduation project: Nieuwe ontwikkelingen Internet technieken: IP Switching en gerelateerde technieken
Supervision: ir. M.J.M. van Weert
Graduation professor: prof.ir. F. van den Dool
Summary: The research consisted of two parts, carried out in sequence. The first part was an overview of the (new) developments in the Internet field, ranging from network techniques to application programs. For the second part, the actual research, a topic had to be chosen from the overview obtained in the first part; in consultation with the graduation professor and supervisor, the new development "IP Switching" was selected. Why IP Switching? In IP networks, conventional routers form the bottleneck in forwarding packets. IP Switching and the related techniques ensure that a router is no longer the bottleneck. How? By speeding up the route lookup in a router's routing table, in which case the router architecture (among other things the throughput of the backplane) must be adapted to profit from the faster lookup; by avoiding as many routers as possible; or by a combination of both. Based on the strategies followed, the techniques can be grouped into a number of main categories, of which the graduation report distinguishes six. For each main category its best field of application was determined:
• Multigigabit routers find their best application in the access links to Internet backbones and in Internet Service Provider networks.
• Peer-to-peer multilayer mapping solutions find their best application in large-scale IP networks where traffic on average has to pass many routers before reaching its destination.
• Server-based solutions: the same as peer-to-peer multilayer mapping solutions.
• Multilayer switching products: without IP autolearning they find their application in the (collapsed) backbones of MAC (LAN) networks, where the problem with conventional routers lies in forwarding inter-subnet and/or inter-VLAN traffic; with IP autolearning they find their application in front of a collapsed backbone router that provides the interconnection between subnets and VLANs.
• IP/MAC address learning solutions find their best application in (Ethernet) MAC (LAN) networks where the core of the network consists of LAN switches and inter-VLAN traffic on average has to pass many routers.
• Others: FastIP and PowerIP, the same as IP/MAC address learning solutions; Netflow Switching, the same as multilayer switching products without IP autolearning.
Candidate: M.G.M. Pluijmaekers
Report no.: ICS-EB 704
Graduation date: 10 December 1998
Graduation project: Management of optical transport networks based on Wavelength Division Multiplexing.
Supervision: dr. H.J.S. Dorren (TUE), ir. M.J. Blange (KPN Research)
Graduation professor: prof.ir. F. van den Dool
Summary: The graduation research focused on the management of WDM transport networks. Part I of the report establishes a functional model that successfully describes the transport functions in such networks. In addition, status signals were defined for optical network elements that capture the state of these elements, and it is indicated how they can be used for managing the optical transport network. This led to the definition of the management information flows in the network, between the network elements themselves on the one hand and between the network elements and the central management system on the other. It is then described how optical networks can be defined flexibly on the central management system, including the definition of the optical network elements and of the connections running over them. Finally, the relation is given between the management system of an optical transport network and the management systems of other, higher-layer transport networks such as SDH, PDH, ATM and IP networks. In part II of the report the theory of part I is applied in practice to design a management system for an optical demonstration network within the research project BOLERO (Beheer Optische Laag ExpeRimenteel Onderzoek) at the Dr. Neher Laboratory of KPN Research in Leidschendam. The specific status signals were established for the equipment used in the BOLERO network, and the functional model presented in part I was used to structure these status signals. It is also indicated how the management information flows can be realised in the BOLERO network and how the central management system should be set up. Separate attention was paid to protection switching in the BOLERO network, since this was of great importance for the reliability of the network.
Candidate: R.W.P. Smeets
Report no.: ICS-EB 699
Graduation date: 15 October 1998
Graduation project: The design of a cell-based dual band network traffic model
Supervision: ir. M. Lebouille (Libertel)
Graduation professor: prof.ir. F. van den Dool
Summary: Libertel has decided to implement a dual band network because of the current growth in capacity demand in the Libertel GSM network and the limited number of GSM 900 frequencies. Since Libertel cannot use the GSM 1800 frequencies until the beginning of 2000, there is time to develop a cost-effective and efficient dual band network deployment strategy. The scope of this Master's thesis project has been the design of a cell-based dual band network model. This model describes the relation between the capacity of an integrated GSM 900/1800 cell, the total amount of offered GSM 900 and dual band traffic, and a predefined Grade of Service parameter. The first stage of the development process focused on gathering and processing knowledge about teletraffic engineering in the field of mobile cellular communications. A search of several bibliographic databases resulted in a general model of a mobile cellular network, identifying the theoretical assumptions most frequently made in the literature. Combining the general model description with the basic theory of teletraffic engineering (e.g. the Erlang-B formula) led to the definition of a conceptual cell model. The conceptual cell model describes a hierarchical network structure, consisting of a micro-cell and an overlaying macro-cell with unequal geographical coverage areas. Realising that a dual band network can be considered a hierarchical network meant a breakthrough in the overall development process. The theoretical dual band network model describes the situation of a GSM 1800 cell with an underlying GSM 900 cell. Dual band terminals are allocated to the GSM 1800 system, while guard channels reserve capacity for the prioritised input source only. Under the assumption of Poisson-distributed input sources (i.e. GSM 900/dual band calls), this dual band network can be modelled by various Markov processes with the corresponding sets of birth-death equations. The implementation of the theoretical description in Matlab focused on determining the maximum amount of processible traffic as a function of the dual band handset penetration. This relation was identified for various dual band network configurations and different values of the Grade of Service parameter. Execution of the Matlab procedures has shown that the maximum processible traffic increases for increasing values of the dual band handset penetration. Additionally, relating the maximum processible traffic for each value of the dual band handset penetration to the overall maximum traffic value (i.e. for the situation of 100% dual band terminals) has shown that a minimum dual band handset penetration of approximately 40% is required for efficient deployment of the dual band network hardware. Various performance calculations have demonstrated that this target value is independent of the predefined Grade of Service. The impact of the hand-down mechanism is marginally positive and should be taken into account for future dual band network dimensioning questions. The implementation of guard channels, on the other hand, is critical, due to their negative effect at relatively large values of the dual band handset penetration. The developed dual band network model has been validated by evaluating network measurements. Detailed analysis of the measurements has shown that modelling the population of subscribers as a Poisson input source is quite reasonable. This is a strong backing for the model used, and therefore for the results obtained in the graduation report.
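The Erlang-B formula mentioned above, and the resulting maximum traffic for a given Grade of Service, can be computed with a short sketch. This is a generic illustration of the underlying teletraffic relation, not the Matlab model of the thesis:

```python
def erlang_b(traffic, channels):
    """Erlang-B blocking probability for `traffic` erlang offered to
    `channels` servers, via the numerically stable recurrence
    B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b

def max_traffic(channels, gos, hi=1000.0):
    """Largest offered traffic (erlang) whose blocking stays at or below the
    Grade of Service `gos`, found by bisection (blocking is monotonic in A)."""
    lo = 0.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if erlang_b(mid, channels) <= gos:
            lo = mid
        else:
            hi = mid
    return lo
```

Sweeping the channel split between the GSM 900 and GSM 1800 input sources with such a routine is, in spirit, what the Matlab procedures of the thesis do to relate processible traffic to the dual band handset penetration.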
Candidate: A.S. Wisse
Report no.: ICS-EB 689
Graduation date: 11 June 1998
Graduation project: ATM network management; an analysis of current management and of future short-term management
Supervision: Ing. S. Tolsma, ir. B. Rijsdijk (PTT Telecom, Amsterdam)
Graduation professor: prof.ir. F. van den Dool
Summary: The graduation assignment is divided into two parts. The first part is an inventory and analysis of the current management and of future management over a horizon of 1 to 1.5 years; the second part examines one of the identified points for improvement in more detail. The objective is to improve the management of the ATM network, by answering the research question: which problems can arise during the short-term period (1 to 1.5 years), and which measures can be taken against them?
The starting point of this investigation is the presence or absence of functionality in operational network management (ATM services). The importance of a given piece of functionality can differ per department or organisation. Functions that matter for operational network management, such as the ability to set up connections automatically, are of less importance to marketing and sales, which for instance considers reporting on delivered services to the customer more important. The chosen perspective is operational network management: ATM services. From this viewpoint the service is an ATM connection; for the customer this service is merely an element in the network. In most cases ATM is used to interconnect networks. Likewise, the network connections from which the ATM network is built (SDH and PDH) are, from the chosen perspective, elements in the network. The frame of reference is the TMN model, which provides a clear network-management structure to serve as a framework during the inventory and analysis.
Candidate: L.D.C. van Helvoort
Report no.: ICS-EB 679
Graduation date: 12 February 1998
Graduation project: Chipcards in Intelligent Network Services
Supervision: ir. J. van der Meer (Ericsson Rijen)
Graduation professor: prof.ir. J. de Stigter
Summary: The Intelligent Network (IN) concept is increasingly implemented in telecommunication networks all over the world. The standardization process for the IN has not finished yet: the IN concept is continuously enriched with new services and solutions. This continuing standardization ensures adaptation to recent market requirements and utilization of the newest technologies. A new technology that might be interesting for IN is chipcard technology. Chipcards tend to become very popular in all kinds of applications. The capability to store information on such a portable medium in a secure way, combined with on-the-card processing power that is especially useful for cryptographic calculations, provides a device which offers many possibilities. The use of chipcards in IN services can be twofold. Firstly, all kinds of user and service information can be stored on a chipcard. User information can be used for identification purposes. Service information on the card enables management and portability of service profiles, for instance to other operators' networks. By means of a user interface the contents of chipcards can be read, edited and stored. Secondly, thanks to their cryptographic capabilities, chipcards offer interesting security solutions. Besides secure transmission of information, reliable authentication of all parties involved is also possible. Many of the IN services that have been standardized up till now can be made more secure, more user-friendly and portable by means of a chipcard. To get some experience with chipcards in general and with the use of chipcards in IN services, a prototyping environment has been developed. The environment consists of a PC with a user interface, connected to a chipcard terminal and a modem, and IN functionality implemented on a Unix platform.
To enable communication between the user interface and the IN, several new Service Independent Building Blocks (SIBs) and service scripts have been developed and new communication messages have been introduced. In this environment the following IN/chipcard (IN/CC) services have been implemented:
- Abbreviated Dialing service
- Account Card Calling (ACC) service
- Universal Personal Telecommunications (UPT) service
The first service makes use of the storage capability; the ACC and UPT services make use of the security capabilities of chipcards. The developed environment forms a good platform for implementing and testing new IN/CC services. Developing prototypes helps the developer understand what is necessary to implement new services, what problems can be encountered, and how users experience the services.
Candidate: T.T. Kijftenbelt
Report no.: ICS-EB 678
Graduation date: 12 February 1998
Graduation project: Development of a Multimedia Service on a TINA Platform
Supervision: Dipl.-Ing. L. Lehmann (Swisscom Corporate Technology), Dipl.-Ing. C.E. Worgler (Swisscom Corporate Technology)
Graduation professor: prof.ir. J. de Stigter
Summary: In today's open telecommunication market, rapid service development is of the utmost importance. To achieve it, the reuse of software components is inevitable. Furthermore, third-party service providers also have to be able to develop services. To meet these requirements a new telecommunications architecture has been defined: TINA. Together with Alcatel and Ericsson, the Unisource partners have developed a platform compliant with the TINA architecture in order to demonstrate the feasibility of TINA. On top of this platform several high-quality multimedia services have been developed. The graduation report describes the development of the DeskTop Presentation service. Besides rapid service development and integration on the platform, the aim of developing a DeskTop Presentation service is to show the possibilities of encapsulating existing software, and to study the application of Internet technologies in TINA services. The project was divided into two phases. During the first phase a DeskTop Presentation service was integrated into a Desktop Video Conference service which had been developed earlier and which takes care of the session management. This was done because a first version of the DeskTop Presentation service had to be demonstrated to potential customers at an early stage. In the second phase of the project the DeskTop Presentation service was implemented as a stand-alone service, which was necessary to show that TINA allows for rapid service development and that several services can be integrated on the same platform. Based on the experience gained during the project, it can be concluded that TINA allows for rapid service development, and that the TINA platform makes it possible for third-party service providers to develop services and to integrate these on the TINA platform. Encapsulation of existing applications also allows for rapid service development, but its possibilities depend on the APIs of the existing application.
Candidate: M.H.M. Lutz
Report no.: ICS-EB 692
Graduation date: 27 August 1998
Graduation project: Parallel processes for telecommunication with Java. Implementing new services in an IN environment.
Supervision: ir. J. van der Meer (Ericsson Rijen)
Graduation professor: prof.ir. J. de Stigter
Summary: The Master's thesis shows how Java can enrich the telecommunication environment. To this end, Java has been coupled to the Intelligent Networks environment through the AXE-VM. Ericsson's AXE-10 switching machine can be deployed for Intelligent Networks (IN). IN puts the signalling and the logic in separate nodes: the logic is placed centrally while the switching remains in the switching nodes, so new services can be integrated at a central place in the network. A virtual machine of this AXE-10 has been built in a Unix environment. It enables code written in C++ and in Plex (the language of the AXE-10) to cooperate. The AXE-VM platform has been built to reduce the implementation time of new services, and it builds upon the growing role of IT in telecommunications. A new computer language that might be interesting for telecommunications, and IN in particular, is Java. The graduation report discusses how the strengths of Java can be used in telecommunications. These strengths are:
1. Run-time environment
2. Event handling
3. Multi-threading
4. Garbage-collected heap
Special attention is also paid to some database functions and the networking facilities provided. In order to integrate Java in the existing telecommunication environment, a communication mechanism between the AXE-VM and the Java-VM has been set up. Different communication mechanisms have been examined; the socket interface was found to be the most suitable. It was implemented with a protocol that enables transparent data communication on both sides. Different ways to integrate Java into the existing environment have also been examined. The choice was made to integrate Java into the IN network. A special SIB (Service Independent Building Block) has been developed for this purpose: a set of parameters can be passed to a Java process and a changed set returns to the IN network. A data module coupled to the SIB determines which service must be executed and what parameters are needed. At both the IN side and the Java side a standard service creation environment for services written in Java has been designed. At the IN side new services are written as Data Modules, which are coupled to the SIB that integrates Java into the IN network. To implement new services at the Java side, a set of standard design rules is provided to the service designer; standard methods must be supplied in a service-dependent way. While installing, removing or upgrading services in the Java environment, both the Java-VM and the AXE-VM can stay in active mode. At the Java side an operation and handling environment has been developed for service execution and error handling. According to the design rules, a service is created and integrated into the environment. In this study the created environment is tested for correctness, robustness and performance by implementing a televoting service at the Java back end.
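The "transparent data communication" over the socket interface requires framing messages on the byte stream. The sketch below is hypothetical: it is written in Python rather than Java or Plex, and the 4-byte length prefix and the televoting payload are invented for illustration, not the protocol actually used between the AXE-VM and the Java-VM.

```python
import socket
import struct
import threading

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Frame each message with a 4-byte big-endian length prefix so the
    # receiver can restore message boundaries on the byte stream.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

def demo() -> bytes:
    # Loopback demonstration: an echo thread stands in for the Java side.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    def echo():
        conn, _ = server.accept()
        send_msg(conn, recv_msg(conn))
        conn.close()

    t = threading.Thread(target=echo)
    t.start()
    client = socket.socket()
    client.connect(server.getsockname())
    send_msg(client, b"TELEVOTING;candidate=3")  # invented payload
    reply = recv_msg(client)
    t.join()
    client.close()
    server.close()
    return reply

print(demo())
```

Length-prefix framing keeps the channel content-agnostic, which matches the requirement that arbitrary parameter sets can pass to a Java process and return changed.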
Candidate: T.T.H. Dennemans
Report no.: ICS-EB 676
Graduation date: 12 February 1998
Graduation project: Specification and Design of a Distributed Real-Time Reactive Control System, using the method Software/Hardware Engineering
Supervision: dr.ing. P.H.A. van der Putten, dr.ir. J.P.M. Voeten
Graduation professor: prof.ir. M.P.J. Stevens
Summary: The graduation thesis describes a contribution to the specification and design of a next-generation mailing machine for BUHRS Zaandam. The partners in this project are BUHRS, TNO-TPD and Eindhoven University of Technology. The project has been granted a subsidy by the Ministry of Economic Affairs for the innovative use of formal methods in the specification and design of industrial controllers. The method used for specification and design is SHE (Software/Hardware Engineering), developed at the section of Information and Communication Systems (ICS) of the Eindhoven University of Technology [Pu+97]. SHE has four phases: Initial Requirements Description, Essential Specification, Extended Specification and Implementation. The essential specification of the next-generation mailing machine is described in the thesis of Emil Reuter [REU97]. The graduation thesis mentioned above describes the addition of the observation scenario to the essential model of the essential specification. The observation scenario is part of the maintenance, test and repair scenario and takes care of the observability of the variables of the processes. So far, only the most important scenarios are specified in the essential model. During the extended specification the essential model must be extended, optimised and made ready for implementation. This means creating a model very close to the real world. Implementation details like CAN communication, I/O communication and specific signal generation are added to the essential model, yielding a new extended model. Optimisation of the extended specification was not completed, in order to speed up the step to the implementation; the extended model should be updated and optimised later for the final release of the implemented software. Implementation started before completion of the extended phase. This was needed to explore whether the new control concepts work on the prototype mailing machine.
When the concepts work, new features must be added. The essential and extended models must be updated with additional scenarios before the implementation can follow, and the optimisation should also be done. To implement the extended model, a POOSL C-library has been created that supports a subset of statements from the POOSL language in an ANSI C environment. This library runs in a multitasking environment on WIN32 (Windows NT and Windows 95) or on an 8051 using the CMX-RTX real-time kernel. Some restrictions apply concerning the way POOSL is used. The extended models are manually converted to C, which could be automated in the future. The control software is distributed over a collection of intelligent boards. The boards being used (8051 processors with CAN interface and various I/O) have a limited amount of code memory. To focus the research on the conceptual solution instead of optimising the size of the code, PCs with Windows NT are used; the processor boards are used for simple remote I/O. Each station of the mailing machine gets its own PC and remote I/O module. The software is ready for testing. It will be tested on the prototype mailing machine, and the results of these tests will be included in the thesis of a successor.
Candidate: E.J.L. Limpens
Report no.: ICS-EB 690
Graduation date: 15 October 1998
Graduation project: Licence plate identification system; design and implementation of a complete licence plate recognition system
Supervision: ir. E.P.M. Bakker, ir. J.P.C.F.H. Smeets (Ellips B.V., Eindhoven)
Graduation professor: prof.ir. M.P.J. Stevens
Summary: The graduation report covers the design and implementation of a licence plate recognition system. Such a system is mainly applied in the automatic fining of speeding offenders and in automatic toll collection. The set-up consists of a full-frame camera with an infrared flash and a radar detector, coupled to a computer via the RIO frame grabber. The camera, frame grabber, radar detector and computer are existing products; the flash was designed specifically for this application. Initially a flash was designed that could be built quickly, so that recording could start as soon as possible. For an improved version a partial design was made, based on a more efficient switching power supply and a faster flash tube. The software is divided into a number of consecutive sub-modules. First the image is enhanced by stretching the histogram. The new image is then scanned in the horizontal direction, searching for regions with many strong gradient transitions. Within the regions found, the dark lines are accentuated by filtering, after which the algorithm searches for objects the size of a character. From the position and number of these objects the exact position of the licence plate is determined. A bilinear transformation maps the licence plate to a horizontal position with all characters upright. From this image the characters are again selected and offered to the OCR engine. Character recognition takes place in two steps. The OCR routines make a first classification based on structural and topological features; for each possible character, the degree to which a number of features are present has been determined from a prototype.
The sum of the differences between the features of the character under test and a prototype indicates the degree of similarity between them. The prototype with the smallest sum has the greatest similarity with the tested character, so the object is classified as the character belonging to the class of that prototype. After this first classification, easily confused characters are reclassified using only their most distinctive features. The most common segmentation errors are caused by visible mounting bolts and by protruding plate edges, which make the characters indistinguishable from the background. During character recognition mainly characters such as the "B" and the "8" are confused. The total number of recognised licence plates currently stands at about 90%.
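The two-step nearest-prototype classification described above can be sketched in a few lines. The feature vectors, the prototypes and the choice of "most distinctive feature" below are invented for illustration (Python, not the system's actual OCR routines):

```python
# Hypothetical feature vectors (hole count, symmetry, stroke density);
# the real features and prototypes from the report are not reproduced here.
PROTOTYPES = {
    "B": (2, 0.9, 0.7),
    "8": (2, 1.0, 0.6),
    "D": (1, 0.8, 0.5),
}

def classify(features, prototypes=PROTOTYPES):
    """First pass: the class whose prototype has the smallest sum of
    absolute feature differences wins."""
    def distance(proto):
        return sum(abs(f - p) for f, p in zip(features, proto))
    return min(prototypes, key=lambda c: distance(prototypes[c]))

# Second pass for easily confused pairs, using only the single most
# distinctive feature (index 2, stroke density, in this toy set-up).
CONFUSABLE = {frozenset({"B", "8"}): 2}

def reclassify(label, features, prototypes=PROTOTYPES):
    for pair, idx in CONFUSABLE.items():
        if label in pair:
            return min(pair,
                       key=lambda c: abs(features[idx] - prototypes[c][idx]))
    return label

feats = (2, 0.95, 0.69)
first = classify(feats)
print(first, reclassify(first, feats))
```

The second pass only re-ranks within a known confusable pair, so it can never turn a clear classification into a different class outside that pair.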
Candidate: P.G.A. van Meer
Report no.: ICS-EB 677
Graduation date: 12 February 1998
Graduation project: Building distributed Smalltalk/Java applications using CORBA
Supervision: ir. S. van de Kuilen (ELC Object Technology, Capelle a/d IJssel)
Graduation professor: prof.ir. M.P.J. Stevens
Summary: Distributed systems and object-oriented programming are closely related. The structure of object-oriented applications is very attractive to distribute, because every component of the application, called an "object", can be distributed, that is, moved to another physical location. This makes applications very flexible and opens new possibilities for development and functionality. The fast-expanding Internet adds yet another dimension to software distribution: applications can now be distributed across the Internet, which makes the software accessible to everyone who has access to it. Until now, for each application, each programming language and each platform another distributed system existed, which was often little more than customised TCP/IP communication between different components. In 1989, the Object Management Group (OMG) started a project to design a standardised distributed system called the Common Object Request Broker Architecture (CORBA). This distributed system reached maturity in 1996 and is designed independently of programming languages and operating platforms. The objects' interfaces are described using an independent Interface Definition Language (OMG IDL), which describes the operations, attributes, exceptions, constants, etc. of an object. By compiling the IDL to skeletons in a specific programming language, the object's implementation can be added to the heterogeneous distributed system. All the objects added to the distributed system communicate using the CORBA Object Request Broker (ORB). Together with its services, the CORBA ORB ensures that all objects can find and use other objects' services. The communication is done using the Internet Inter-ORB Protocol (IIOP), also specified by the OMG. In the graduation report, CORBA is used to build distributed applications with two different languages: Smalltalk and Java.
Smalltalk is a proven object-oriented language, which is very powerful for developing complex business logic. In contrast, Java, which is only two years old and strongly supported by Internet browsers for adding applications to Internet homepages, is not yet mature enough for complex applications. CORBA can bring these two features (a reliable, proven language versus an Internet-supported language) together by distributing the business logic (Smalltalk) and the view logic (Java in an Internet browser) of the application. CORBA's language-independent architecture is discussed, after which the focus turns to CORBA for the Smalltalk and Java object-oriented languages. Features such as the IDL-to-Smalltalk and IDL-to-Java language mappings, commercial implementations of CORBA for Smalltalk and Java, and performance measurements of CORBA are presented and discussed. To understand CORBA even better, a description of the implementation of a Smalltalk server ORB is discussed.
Candidate: B.G.F.A.W. Oversteegen
Report no.: ICS-EB 681
Graduation date: 23 April 1998
Graduation project: Touch Screen Based Measuring Equipment, Design and Implementation
Supervision: dr.ir. A.C. Verschueren, ir. R.C.H.M. Overkamp (Philips EED, section MMM), H. van Broekhuyzen (Philips EED, section MMM)
Graduation professor: prof.ir. M.P.J. Stevens
Summary: Today many user interfaces are redesigned to satisfy the needs of industrial operators. Man-machine interfaces are optimised to offer users a high degree of functionality. The main subject of the graduation thesis was to investigate the possibility of redesigning an existing user interface for industrial production equipment. The thesis comprises three major parts. The first part deals with the selection of appropriate dialogs for operating industrial equipment. A comparison is made between the possibilities offered by several dialog styles. The results show that a menu system with direct manipulation is highly recommended to satisfy the needs of a large group of different users. The literature also shows that the desired dialog is best realised with a touch screen input/output device. The second part deals with the constraints on user interfaces when a touch screen is used as the communication medium. A comprehensive inquiry has been made into design guidelines for a touch-screen-based user interface. The electrical characteristics of a touch screen device are important: the industrial environment is very harsh for delicate electronics, and for the present study the electrostatic discharges during measuring were of great importance. The surface acoustic wave technology proved to be a reliable touch screen technology in those harsh environments. The last part of the report deals with the design of touch-screen-based industrial equipment. An object-oriented method is used to design the measuring system; the implementation is done with Visual C++. The design is made flexible and reusable to satisfy the demands of future equipment. The dialogs used are designed using the style guide made in the second part of the thesis.
Candidate: M.L.J. Wijnen
Report no.: ICS-EB 7021
Graduation date: 10 December 1998
Graduation project: Instruction scheduling in parallel DSP architectures
Supervision: dr. Verschueren, ir. F. Vermeire (Philips)
Graduation professor: prof.ir. M.P.J. Stevens
Summary: The Master's thesis describes how two different techniques for increasing the performance of a DSP processor architecture can be combined in a single DSP architecture. These two techniques are:
• Processing multiple instructions in a single clock cycle
• Pipelining
To do so, a processor architecture model has been defined with multiple similar "execution units", and this architecture model has then been pipelined. After pipelining, problems may appear in the model when reusing the results of previously issued instructions. These problems can be solved by switching instruction results to the input of execution units while they are still in the pipeline registers, which means that additional hardware has to be incorporated. However, switching results that are not yet in the accumulators but still in the pipeline registers is not always possible, which in practice means that the processor has to "stall" for one or more clock cycles. In order to minimise the number of generated stalls, the technique of instruction scheduling is used. In effect, a piece of hardware called the scheduler is added to the architecture, which determines which particular instructions can best be executed on which particular execution units in order to minimise stalls; the scheduler then takes care of the proper execution of these instructions. It is discussed how the scheduler can be simplified by smartly choosing which pipeline registers can be switched to which execution units. Finally, a simulator has been built to verify whether an architecture that uses a scheduler to solve pipeline conflicts can actually be built.
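The stall-and-scheduling idea can be illustrated with a toy model. The two-cycle result latency, the instruction names and the greedy list-scheduling heuristic below are assumptions for illustration (a small Python sketch, not the thesis's actual scheduler hardware or simulator):

```python
LATENCY = 2  # assumed cycles before a result can be switched back (invented)

def count_stalls(program, deps):
    """Issue `program` strictly in order, one instruction per cycle,
    stalling until every operand of the next instruction is ready."""
    issue = {}
    cycle = stalls = 0
    for instr in program:
        earliest = max((issue[d] + LATENCY for d in deps.get(instr, ())),
                       default=0)
        if cycle < earliest:          # operands still in the pipeline
            stalls += earliest - cycle
            cycle = earliest
        issue[instr] = cycle
        cycle += 1
    return stalls

def schedule(program, deps):
    """Greedy list scheduling: each cycle, prefer an instruction whose
    operands have already left the pipeline registers, so stall slots
    are filled with independent work."""
    remaining = list(program)
    issue, order, cycle = {}, [], 0
    while remaining:
        candidates = [i for i in remaining
                      if all(d in issue for d in deps.get(i, ()))]
        free = [i for i in candidates
                if all(issue[d] + LATENCY <= cycle
                       for d in deps.get(i, ()))]
        pick = free[0] if free else candidates[0]
        start = max([issue[d] + LATENCY for d in deps.get(pick, ())] + [cycle])
        issue[pick] = start
        cycle = start + 1
        order.append(pick)
        remaining.remove(pick)
    return order

program = ["a", "b", "c", "d"]
deps = {"b": ("a",), "d": ("c",)}  # b consumes a's result, d consumes c's
print(count_stalls(program, deps))                  # in-order issue: 2 stalls
print(count_stalls(schedule(program, deps), deps))  # after scheduling: 0
```

Interleaving the two independent chains (a, c, b, d) hides both result latencies, which is exactly the effect a hardware scheduler aims for.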
Candidate: P.J.H. Zinken
Report no.: ICS-EB 691
Graduation date: 27 August 1998
Graduation project: An intelligent sensor controller using Profibus
Supervision: ir. H.P.E. Stox, ir. J.P.C.F.H. Smeets (Ellips B.V., Eindhoven)
Graduation professor: prof.ir. M.P.J. Stevens
Summary:
The new fruit grading system currently developed by Ellips B.V. is based on a Profibus DP (PROcess FIeld BUS, Decentralised Peripherals) environment. The system consists of one master computer and several intelligent slaves: an I/O controller, a weight controller, a sensor controller and a diameter and color measurement system. The Master's thesis discusses the design of the sensor controller card. The sensor controller board is a multifunctional board for the decentralised handling of position measurement, static/dynamic weight measurement, input/output control and fruit roll measurement. It features two high-resolution rotary encoders, a 24-bit sigma-delta analog/digital converter, 8 isolated analog inputs and 4 isolated digital outputs. The sensor controller board is an embedded system based on an 80C310 microcontroller; it interfaces to the Profibus through an SPC4 Profibus controller. A QuickLogic FPGA is used to implement most of the memory management and the position decoding, and to glue all other parts together. In the design the position measurement must be as accurate as 1/20th of a cup. The static weight measurement is performed with a peak-peak resolution of 16 bits, so the accuracy is better than 0.2 gram. Dynamic operation can be performed at 14 bits with an accuracy of better than 1 gram (based on a 10 kg bridge). The input circuitry can handle most industrial encoders, proximity sensors and photo switches. The output circuitry is designed for switching relays up to 1.8 ampere.
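The quoted weighing accuracies follow directly from the stated resolutions over the 10 kg bridge. A quick arithmetic check (Python; it assumes the full 10 kg span maps onto the stated number of noise-free peak-peak bits):

```python
BRIDGE_G = 10_000  # full scale of the assumed 10 kg load cell, in grams

def resolution(bits: int) -> float:
    """Smallest distinguishable weight step in grams for a converter
    with `bits` of noise-free (peak-peak) resolution over the bridge."""
    return BRIDGE_G / (1 << bits)

static = resolution(16)   # static measurement: 16 usable bits
dynamic = resolution(14)  # dynamic measurement: 14 usable bits
print(f"static: {static:.3f} g, dynamic: {dynamic:.3f} g")
# static ~0.153 g (consistent with "better than 0.2 gram"),
# dynamic ~0.610 g (consistent with "better than 1 gram")
```

Both figures land comfortably inside the claimed accuracy bounds, so the accuracy claims are consistent with the resolution claims.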
Chair of Design Methodology for Electronic Systems (Leerstoel Ontwerpkunde voor Elektronische Systemen)
Candidate: S.A.M. Hermans
Graduation date: 10 December 1998
Graduation project: A dynamic memory allocator for a settop box
Supervision: dr. de Lange
Graduation professor: prof.dr.-ing. J.A.G. Jess
Summary: The graduation assignment comprised the design and implementation of a dynamic memory management method for the e-4TV software stack. E-4TV is a project within the Philips Advanced Systems and Application Laboratory (ASA-Lab) that delivers a global platform for digital TV receivers. An e-4TV digital receiver is an embedded system in the form of a settop box and is based on a MIPS processor core. The (layered) software stack is based on the real-time kernel pSOS, is developed using the Object Modelling Technique (OMT) and is mainly implemented in C++. The previously used dynamic memory management method for e-4TV entailed a number of drawbacks, because much of the pSOS memory allocation functionality was used. One of these drawbacks is memory loss due to coarse rounding of the requests. Other disadvantages are the complex API (Application Programming Interface) being applied and the memory overhead due to memory management. The currently used memory management method also requires memory budgeting, which means that the memory usage of the e-4TV environment must be known in advance; determining these budgets is a laborious and error-prone activity. The new dynamic memory management method must entail low internal and external fragmentation figures and little memory overhead due to the data structures used for memory management, and it must have real-time predictable performance. Less detailed budgeting is also desired, and a complex API must be avoided. The memory allocator must support multiple allocator instances managing different address spaces, because it is desirable that (unpredictable) software modules are separated in memory from other modules. Another requirement is the possibility of detailed monitoring of memory usage. A literature study was performed to see whether an existing dynamic memory management method could be applied for e-4TV.
Four more or less general strategies are encountered, namely sequential fits, segregated freelists, buddy systems and a handle-based method using compaction. All these strategies have their specific drawbacks, and because of these drawbacks a dynamic memory allocator was made that is geared to the allocation characteristics of the e-4TV software environment. Only conventional dynamic memory management methods are considered. A conventional allocator cannot compact memory: once it has decided which block of memory to allocate, it cannot change that decision, and that block of memory must be regarded as inviolable until the program that requested it chooses to free it. For conventional allocators it has been proven that, for any possible allocation algorithm, there will always be an application program that allocates and deallocates blocks in some fashion that defeats the allocator's strategy and forces it into severe fragmentation. Therefore the dynamic allocation behaviour of the e-4TV stack is, to a certain extent, taken into consideration when designing the new allocator. The first preliminary conclusion that can be drawn is that the memory allocator must be tuned for relatively small requests and for ramp-like and peak-like allocation behaviour. There are, in general, two ways to solve the problem of external fragmentation: use memory compaction, or simply do not use more than one block size in a memory area. Because the first method cannot guarantee predictable performance and introduces a level of indirection, the second method is applied. Holes that originate in a specific area can be reused for blocks of the same size, preventing the areas from drowning in external fragmentation. These areas are built from equally sized reusable containers. Internal fragmentation, due to rounding, is reduced by introducing a tuneable range of values to round to.
This way the allocator can be tuned for a specific memory request trace or, when the memory trace is not known, a more or less linear range of values can be used. With a smaller intrablock difference, a better fit to the requested block size can be made, yielding lower internal fragmentation.
In the e-4TV environment many small sizes are requested, therefore a large number of small values are specified to round to. If a certain relatively large block size is frequently used, it can also be made a value to round to, so that requests of that size are not rounded. Thanks to this strategy, internal fragmentation is reduced by an average of 70% compared to the previously used memory allocator. A dynamic allocator must also be very economical with the memory needed for administration. The allocator described here introduces very little memory overhead: the overhead per container is negligible (0.3%). Another potential source of memory overhead is the freelists, but for e-4TV there is no overhead as a result of freelists because they are implemented as linked lists with the nodes stored in the free dynamic memory space itself. The static overhead due to management data is reduced by 85% with regard to the previously used memory allocation method. In many allocators a suitable block is found by searching the freelist; as the number of free blocks grows, the time to search the list may become unacceptably large. Because, in the case of the new allocator, the blocks within a container are of equal size, there is no need to order or search the freelist, making an allocation a time-constant operation (O(1)). Freeing is also a time-constant operation, because the identity of a returned address is determined by means of a simple address calculation instead of a search. As a result, the new allocator is about twice as fast for allocation, and the deallocation process is about 7.5 times faster. It is up to the user of the memory allocator to decide which information must be monitored; a generic framework is set up to support monitoring, and the information passed with a request is stored in a data structure. The memory allocator does not need a complex API: the size of the request alone is sufficient.
Nevertheless, the currently used APIs are also supported for backwards compatibility.
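The container scheme described above can be illustrated with a minimal sketch. All names, sizes and the round-to list below are invented for illustration; the freelist is threaded through the free blocks themselves (here modelled by an index array), so it costs no extra memory, and both operations are O(1).

```python
class Container:
    """One container holds blocks of a single rounded size; the freelist is
    intrusive (stored in the free blocks themselves), so no extra memory."""
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.next_free = list(range(1, num_blocks)) + [-1]  # intrusive links
        self.head = 0  # index of the first free block

    def alloc(self):
        """O(1): pop the freelist head; blocks are equal-sized, so no search."""
        if self.head == -1:
            return None  # container full
        block = self.head
        self.head = self.next_free[block]
        return block

    def free(self, block):
        """O(1): the block identity follows from the address by a simple
        calculation (here the index itself); push it back on the freelist."""
        self.next_free[block] = self.head
        self.head = block

# Hypothetical trace-tuned round-to values (not the e-4TV values):
ROUND_SIZES = [8, 16, 24, 32, 64, 256]

def round_up(size):
    """Round a request up to the nearest round-to size; the difference is
    the internal fragmentation the tuning tries to minimize."""
    return next(s for s in ROUND_SIZES if s >= size)
```

A frequently used large block size would simply be added to `ROUND_SIZES`, so that requests of that size are not rounded at all.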
-85-
Candidate: H.A.J.N. van Orsouw
Graduation date: 15 October 1998
Graduation project: Feasibility of Multi Processor Architectures for video coding algorithms
Supervision: prof.dr.-ing. J.A.G. Jess
Graduation professor: prof.dr.-ing. J.A.G. Jess
Summary: To come to a cost-efficient implementation of real-time video coding algorithms, the overlap in their functionality must be exploited. This can be done by implementing the algorithms in a reconfigurable multi-processor architecture in which the functional units can be reused by making them 'weakly' programmable. In this report the feasibility of the Prophid and C-Heap templates for the implementation of a processor that handles the functionality at the macroblock level is discussed. A hardware/software trade-off must be made based on the different characteristics of the architecture templates. A separate implementation of the motion estimation unit is proposed; this unit is responsible for the search for a prediction block that is needed to remove temporal redundancy.

Prophid is a high-performance, hardware-oriented architecture. The flexibility that can be offered must be taken into account at compile time. Guaranteed bandwidth is offered by a TST (Time-Switch-Time) network. Consequently, this architecture is efficient if the average communication between the functional units is close to the peak communication, which is the case for MPEG encoding, decoding and transcoding if all tasks are implemented in dedicated functional units.

The C-Heap architecture is based on a general-purpose processor and the PI-bus. The performance of this software-oriented architecture can be increased by porting tasks to functional units that are connected to the PI-bus. Variable stream-based communication is possible by initializing virtual channels and buffers between these tasks and the CPU. This programmability makes it easy to implement new features and to add new units to the existing system. The H.263 algorithm can be realized with this flexible architecture because its performance demands are not as high as those of MPEG. To meet the performance demands of MPEG, several C-Heap systems could be placed in parallel, but the cost of this is high.
The memory requests of the motion estimation unit are limited by the use of the "early jump" algorithm and by the overlap characteristics of predictions in combination with the 3DRS algorithm. To make this possible a flexible implementation of this unit is required. These new features demand low-latency and high-throughput access to memory; therefore the motion estimation unit cannot be implemented efficiently within the Prophid architecture. The C-Heap architecture, however, offers more flexibility, and if the performance demands can be met, the motion estimation unit can be implemented in the C-Heap system. The most difficult part is to estimate the future developments in video coding, and therefore the amount of flexibility required in an architecture. Architectures are proposed that contain both software and hardware, to offer flexibility next to performance. The amount of software present in the architectures covers only the expected or proposed improvements in the near future, so as to be able to adapt to new demands.
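The candidate-based motion estimation mentioned above can be sketched in miniature. This is a generic sum-of-absolute-differences search with an early-termination check standing in for the "early jump" idea; the candidate list, block size and threshold are assumptions for illustration, not the thesis' actual 3DRS parameters.

```python
import numpy as np

def sad_early_jump(cur, ref, bx, by, B, candidates, jump_thresh):
    """Evaluate candidate motion vectors (in 3DRS, candidates come from
    neighbouring blocks) and stop as soon as a match is 'good enough',
    which limits the number of reference-memory requests."""
    cur_blk = cur[by:by + B, bx:bx + B].astype(int)
    best_mv, best_sad = (0, 0), float("inf")
    for (dx, dy) in candidates:
        x, y = bx + dx, by + dy
        if not (0 <= x <= ref.shape[1] - B and 0 <= y <= ref.shape[0] - B):
            continue  # candidate falls outside the reference picture
        sad = int(np.abs(cur_blk - ref[y:y + B, x:x + B].astype(int)).sum())
        if sad < best_sad:
            best_mv, best_sad = (dx, dy), sad
        if best_sad <= jump_thresh:  # early jump: skip remaining candidates
            break
    return best_mv, best_sad
```

Because neighbouring candidates point at overlapping reference areas, a small local buffer can serve most of these reads, which is the overlap property the report exploits.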
-86-
Candidate: A.W. Peters
Graduation date: 15 October 1998
Graduation project: Automatic generation of on-line error detectors for linear analog systems
Supervision: prof.dr.-ing. J.A.G. Jess, prof. Simeu (TIMA Grenoble)
Graduation professor: prof.dr.-ing. J.A.G. Jess
Summary: The demand for reliable electronic systems grows as their complexity increases. One of the tools available to ensure a system's well-being is concurrent error detection. Methods for error detection in the digital domain are well developed; the development of concurrent error detection for analog circuits, however, has been hindered by the lack of good fault modeling. This report describes an automated method for on-line error detection in linear, time-invariant electronic circuits. It is divided into two parts: the first part describes how to transform a netlist description into 1) an accurate state-space model for nominal operation, 2) a model for unknown inputs or noise, and 3) a fault model. The second part applies the model to the vast body of knowledge in control theory about dead-beat observers. These observers serve as error detectors because their output is zero if the available measurements of the circuit under test match the nominal behavior, and non-zero if there is a mismatch. The nominal behavior is hardwired into the detector using analytical dependencies between the nodes under test. The algorithm described in this report ensures maximum sensitivity to faults and minimal sensitivity to unknown inputs or noise. The topology of the detection circuit for a given order is always the same, which also means that the overhead of concurrent error detection is independent of the order of the circuit under test. To synthesize a detection circuit, only the coefficients that characterize the circuit under test are needed; these coefficients are generated automatically from the data available in the netlist.
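The residual idea behind such a detector can be shown on a toy first-order system. The model, gain and fault below are invented for illustration; the point is only that a dead-beat observer drives the residual to zero under nominal behavior, while a deviated component keeps it non-zero.

```python
# Hypothetical first-order circuit model: x[k+1] = a*x[k] + b*u[k], y = c*x.
a, b, c = 0.5, 1.0, 1.0
L = a / c  # dead-beat gain: a - L*c = 0, so the estimation error dies in one step

def run(fault_gain=1.0, steps=6):
    """Return the residual sequence r[k] = y[k] - c*xhat[k].
    fault_gain != 1 models a deviated component (a crude fault stand-in)."""
    x, xhat, res = 1.0, 0.0, []
    for _ in range(steps):
        y = c * x
        r = y - c * xhat
        res.append(r)
        xhat = a * xhat + L * r          # observer update (input u = 0)
        x = (a * fault_gain) * x         # plant update, possibly faulty
    return res

nominal = run(1.0)   # residual becomes exactly zero after one dead-beat step
faulty = run(1.4)    # residual stays non-zero: a mismatch is flagged
```

In the report the detector monitors measured circuit nodes instead of a simulated state, but the zero/non-zero decision logic is the same.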
-87-
Candidate: M.J. Rutten
Graduation date: 15 October 1998
Graduation project: Modeling TriMedia 3D Graphics support
Supervision: dr. J. v. Eijndhoven
Graduation professor: prof.dr.-ing. J.A.G. Jess
Summary: The high demands on processing power and memory bandwidth of emerging new multimedia applications call for an optimal configuration of hardware and software. The PROMMPT3D project concerns the development of such a hardware and software solution for 3D graphics; the resulting graphics support is integrated in Philips' next-generation TriMedia media processor. Within the scope of the PROMMPT3D project at Philips Research, this thesis gives a comprehensive analysis of all aspects of the graphics pipeline with regard to TriMedia graphics applications. The Open Graphics Library (OpenGL), along with the Mesa software implementation, forms the basis of this study. With these, we define a set of benchmark OpenGL applications that impose a workload on the proposed architecture representative of the class of 3D graphics applications the next-generation TriMedia processor should support. Based on the set of OpenGL features that these benchmark applications use, the graphics pipeline is partitioned into logical graphics tasks. This partitioning lays the foundation for a simulation model of the graphics tasks that can be mapped onto the candidate architecture, either in the form of dedicated graphics co-processors or as software on the core CPU. The simulation model decouples the graphics pipeline from the architecture specification, thereby allowing a Y-chart approach in which various mappings can be exercised at various levels of abstraction with limited effort. Here, we define a preliminary simulation model at the highest level of abstraction. This Kahn process network consists of five concurrent processes or tasks, namely the application, the model-view transformation, shading, rendering, and the task responsible for writing the resulting picture to file for display. These five tasks can be further partitioned for detailed design-space exploration of 3D graphics on TriMedia.
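The five-stage Kahn process network can be mimicked with threads and blocking FIFO channels. The stage functions below are trivial placeholders (the real stages transform and rasterize geometry); what the sketch shows is the KPN structure: each process does blocking reads on its input channel, so the output is deterministic regardless of scheduling.

```python
from queue import Queue
from threading import Thread

def process(fn, inq, outq):
    """One Kahn process: blocking reads on the input channel, write results
    to the output channel; a None token terminates the stream."""
    while (tok := inq.get()) is not None:
        outq.put(fn(tok))
    outq.put(None)

# Placeholder computations standing in for the five graphics tasks:
stages = [lambda v: v,      # application: emit primitives
          lambda v: v * 2,  # model-view transformation
          lambda v: v + 1,  # shading
          lambda v: v ** 2, # rendering
          lambda v: v]      # write the resulting picture for display

chans = [Queue() for _ in range(len(stages) + 1)]
threads = [Thread(target=process, args=(f, chans[i], chans[i + 1]))
           for i, f in enumerate(stages)]
for t in threads:
    t.start()
for v in [1, 2, 3, None]:
    chans[0].put(v)
for t in threads:
    t.join()

out = []
while (tok := chans[-1].get()) is not None:
    out.append(tok)
```

Remapping a stage to a co-processor model only changes the body of `fn`, not the network, which is what makes the Y-chart exploration cheap.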
-88-
Candidate: J.M. Smolders
Graduation date: 11 June 1998
Graduation project: Motion compensation analysis for next generation TriMedia processors
Supervision: ir. Tromp / dr. v.d. Wolf (Philips)
Graduation professor: prof.dr.-ing. J.A.G. Jess
Summary: The next generation of TriMedia processors must be able to decode MPEG-2 video at Main Profile at High Level (MP@HL). This application requires a lot of processing power as well as communication bandwidth. A proposed relief for the TriMedia core (DSPCPU) is to implement computationally intensive tasks in dedicated hardware. One of these tasks is the motion compensation function, which reconstructs the current picture using previously reconstructed pictures and displacement information. In this report we study the communication workload related to the motion compensation function. In particular, we study the data stream used to transfer reference data from main memory to a motion compensation co-processor. Reduction of this (large) data stream is possible through reuse of previously fetched reference data. Based on actual MPEG-2 streams, theoretical upper limits of the bandwidth requirements are determined. For realistic scenes we analyse the amount of reuse and the corresponding reduction in bandwidth requirements that may be obtained. To take advantage of reuse, reference data has to be buffered in the motion compensation co-processor, for which memory is required. We study how reuse and the corresponding bandwidth reduction depend on the amount of co-processor memory: a significant reduction in the bandwidth requirements may be obtained at the cost of a limited amount of memory. By introducing a cache-like memory unit we study how reuse can be exploited in practice. We use our measurement results to judge the performance of different cache organizations and to propose a buffering strategy.
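The trade-off between buffer size and fetched bandwidth can be measured with a toy cache model. The direct-mapped organization, line size and addresses below are made up; the sketch only illustrates how overlapping reference reads turn into fewer main-memory fetches once a cache-like unit is present.

```python
def fetched_words(requests, cache_lines, line_words=8):
    """Count the words fetched from main memory for a sequence of word
    addresses, with a direct-mapped cache of `cache_lines` lines."""
    tags = [None] * cache_lines
    fetches = 0
    for addr in requests:
        line = addr // line_words
        slot = line % cache_lines
        if tags[slot] != line:   # miss: fetch the whole line from memory
            tags[slot] = line
            fetches += 1
    return fetches * line_words

# Two overlapping 16-word reference blocks, as neighbouring predictions often are:
reqs = list(range(0, 16)) + list(range(8, 24))
# Without reuse, 32 words would cross the bus; with the cache, the
# overlapping range [8, 16) is fetched only once.
```

Sweeping `cache_lines` over a trace of real motion vectors is, in miniature, the memory-size versus bandwidth study performed in the report.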
-89-
Candidate: O.L. Steinbusch
Graduation date: 23 April 1998
Graduation project: Designing hardware to interpret Virtual Machine Instructions
Supervision: ir. M.M. Lindwer (Philips Research)
Graduation professor: prof.dr.-ing. J.A.G. Jess
Summary: This Master's thesis describes the project the author completed between June 1997 and February 1998 at Philips Semiconductors in Sunnyvale, California, USA. The project concludes the author's study of Information Technology at the Department of Electrical Engineering of the Eindhoven University of Technology. The thesis details the design and development of a modular hardware component called the Virtual Machine Interpreter (VMI). By using the VMI, programs written in Java Byte Code (JBC) execute faster on a RISC platform. In this case the MIPS platform is targeted, but any RISC platform could be. The concept is applicable to any stack-oriented machine language, but it is implemented only for JBC. A functional model of the VMI was designed at the Philips Research Laboratories Eindhoven (PRLE) and used in simulations to prove the feasibility and estimate the performance of the VMI combined with a MIPS processor. These preliminary results are analyzed, verified and amended, and additional research brings new results. In cooperation with PRLE, a model of the VMI core was designed at register transfer level (RTL), using VHDL. The technologies described in this document are copyright-protected by Philips Semiconductors; in addition, some concepts are patented.
Conclusions:
• At the evaluated level of system integration, the VMI speeds up the execution of JBC by 4.0 times, compared to a software interpreter.
• The VMI speeds up a typical large application for the embedded market by 2.6 times, because not all execution time is spent interpreting pure JBC.
• The structural VHDL model of the VMI core is an RTL description of the functional C model of the VMI core. It correctly handles the same bytecodes the functional model can handle. This is an important step towards a hardware implementation.
• The structural model of the VMI core will, with a few improvements, generate instructions fast enough not to endanger the performance gain as estimated in chapter three.
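The overhead that the VMI removes is easy to see in a software interpreter. The sketch below interprets a tiny stack-machine program; the opcode names mimic Java bytecodes, but the encoding and instruction subset are invented here. Every instruction pays a fetch-decode-dispatch round trip in software, which is exactly the work a hardware interpreter can replace by emitting native RISC instructions.

```python
# Opcode names mimic a few Java bytecodes; the numeric encoding is invented.
ICONST, ILOAD, IADD, IMUL, IRETURN = range(5)

def interpret(code, locals_):
    """A software bytecode interpreter: the while/if chain is the
    fetch-decode-dispatch overhead paid on every single bytecode."""
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1                      # fetch + decode
        if op == ICONST:
            stack.append(code[pc]); pc += 1
        elif op == ILOAD:
            stack.append(locals_[code[pc]]); pc += 1
        elif op == IADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == IMUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == IRETURN:
            return stack.pop()

# (x + 3) * y for locals x = 2, y = 5:
prog = [ILOAD, 0, ICONST, 3, IADD, ILOAD, 1, IMUL, IRETURN]
```

The 4.0x figure in the conclusions is, roughly, the cost of this dispatch loop relative to executing the equivalent native instructions directly.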
-90-
Leerstoel Elektronische Schakelingen
-91-
Candidate: E. Brockmeyer
Graduation date: 23 April 1998
Graduation project: Low power data transfer and storage exploration for MPEG-4 on multimedia processors
Supervision: dr.ir. F. Catthoor (IMEC Leuven), dr.ir. L.K.J. Vandamme (ERASMUS coordinator)
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary: To reduce the huge amount of video and audio data to be communicated and stored in multimedia applications, data compression standards are needed. The new MPEG-4 standard supports "true" multimedia applications, including objects in audio and video: several video objects of different size and arbitrary shape, from different sources, can be combined on a single screen. The power cost is heavily dominated by the storage and transfer of complex data types. The power requirements of these memory-intensive applications can be reduced by optimizing the data transfers (over long distances). Multimedia processors have specialized hardware to meet the high bandwidth and throughput requirements of these applications. The mapping in this work is targeted at a TriMedia processor (TM1000), so the intermediate memory sizes in the memory organization are fixed; the application program has to make efficient use of this given memory architecture.

System design starts with a system specification. Typically an algorithm is chosen to meet the system specification, and this choice has a very big impact on the power, area and performance (cost) of the system. The chosen algorithm can be implemented in numerous ways; the design space to be explored is huge. The Data Transfer and Storage Exploration (DTSE) methodology of IMEC helps to explore the system design space systematically for data-dominated applications before going to lower levels of the design. The methodology minimizes the memory storage and transfers in order to reduce the power (and area) requirements of a multimedia application. The power consumption of an application is given by P = C_load · f_real · V_swing · V_dd. We can reduce both f_real and the effective C_load by the way we use the storage and interconnect devices. The main goal is to arrive at a memory and transfer organization with the following characteristics:
• the least possible redundancy in data transfers (reduces f_real);
• improved locality in the data accesses, so that as much data as possible can be retained in registers local to the data path (reduces f_real);
• a hierarchical memory organization in which the smaller memories (with reduced C_load) are accessed the most and the larger ones the least.
Very promising results have been obtained: the methodology has led to a reduction of the number of main-memory accesses by a factor of 81. The MPEG-4 standard is very flexible, and many optimization parameters are data-dependent and only known at run time. One of the major problems solved in this thesis is how to use the resources efficiently and how to optimize them at compile time already. This has been done successfully for the motion estimation modules.
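The kind of transfer reduction DTSE aims at can be shown on a toy kernel. The sliding-sum computation below is invented for illustration (it is not an MPEG-4 module): the first version re-reads every operand from "main memory", the second keeps a running value local to the data path, so f_real drops while the result is unchanged.

```python
def sliding_sums_naive(data, w):
    """Each window re-reads all w elements from main memory."""
    reads, out = 0, []
    for i in range(len(data) - w + 1):
        s = 0
        for j in range(w):
            s += data[i + j]
            reads += 1        # every operand is a main-memory transfer
        out.append(s)
    return out, reads

def sliding_sums_reuse(data, w):
    """Keep the running sum in a 'register': one new read per window.
    This is the DTSE-style locality improvement in miniature."""
    reads = w
    s = sum(data[:w])
    out = [s]
    for i in range(w, len(data)):
        s += data[i] - data[i - w]   # data[i-w] is still held locally
        reads += 1                   # only data[i] is a new transfer
        out.append(s)
    return out, reads
```

For a window of w the access count drops from roughly w reads per output to one, the same kind of factor the methodology achieved (much more dramatically) on the motion estimation modules.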
-92-
Candidate: M.B.P. Esvelt
Graduation date: 23 April 1998
Graduation project: A multicarrier modem architecture for VDSL
Supervision: ir. J. Maris, ir. S. Vernalde (IMEC Leuven)
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary: To provide high bit rates to the customer, a number of new access technologies are being developed: cable modems, optical fiber transmission systems, and Digital Subscriber Line (DSL) technologies. This report concentrates on a Discrete Multitone (DMT) multicarrier implementation of the DSL technologies. DSL uses the twisted pair of the plain old telephone system. A simulation model is presented, together with an implementation on a TMS320C40 Digital Signal Processor. Performance studies of this model show that 52 Mbit/s VDSL is possible using DMT.
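The core DMT operation is a small amount of code: QAM values are placed on carriers, mirrored for Hermitian symmetry so the IFFT output is real, and a cyclic prefix is prepended. The carrier count and prefix length below are illustrative, not the VDSL standard numbers.

```python
import numpy as np

def dmt_symbol(qam, n_fft=16, cp=4):
    """Build one DMT symbol: QAM values go on carriers 1..len(qam); the
    complex conjugates are mirrored so the IFFT output is real; the last
    cp samples are prepended as a cyclic prefix."""
    spec = np.zeros(n_fft, dtype=complex)
    spec[1:1 + len(qam)] = qam
    spec[-len(qam):] = np.conj(qam[::-1])   # Hermitian symmetry
    sym = np.fft.ifft(spec).real
    return np.concatenate([sym[-cp:], sym]) # cyclic prefix + symbol

qam = np.array([1 + 1j, -1 + 1j, 1 - 1j])
tx = dmt_symbol(qam)
```

At the receiver, stripping the prefix and taking the FFT recovers the QAM values on each carrier (after per-carrier equalization on a real line), which is what makes DMT attractive on the frequency-selective twisted pair.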
-93-
Candidate: B.J.A. Frijns
Graduation date: 27 August 1998
Graduation project: Intelligent Beam Current Control
Supervision: dr. L. v.d. Broeke, ir. V. Frequin (Philips); ir. G. Persoon, dr. J.A. Hegt
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary: One of the ways to improve the perceived picture quality lies in increasing the contrast by allowing higher beam currents whenever (and wherever) possible. However, the physical limitations of the TV set then have to be monitored accurately (especially those concerning the picture tube and the line output transformer) in order not to introduce any (new) artefacts or shorten the lifetime of the set. Hence, it is important to measure the beam current properly. To achieve this, an all-integrated small-signal beam current control is introduced. Such an integrated approach has the additional benefit of lower cost. Furthermore, the performance of the various picture improvement ICs (developed at CSN) might be improved as well, as the discrete limiting circuits currently designed by the set makers might obstruct their intended improvements.

The trend in picture tube design is towards Akoca shadow masks again, as these tubes are some 30 guilders cheaper than Invar tubes, although the latter are much less sensitive to local doming. In order to overcome this local doming problem, yet offer picture tube manufacturers and set makers this benefit of lower cost, it is important to design a powerful electronic local doming prevention (ELDP) algorithm. Before introducing an ELDP algorithm, the various beam current control circuits currently used are first to be optimised for their original tasks. It is investigated how such limiting algorithms can be realised in the small-signal domain, so that they can be integrated in a video control IC. Based on the small-signal r,g,b voltages, new draft algorithms have been designed to calculate, rather than actually measure, the (average and peak-white) beam current. In addition, a new feed-forward soft-clipper algorithm has been designed. Next, some recommendations are given on how to actually control the r,g,b channels based on those calculator outputs.
It should be emphasized that the calculators mentioned are only draft designs. During the investigations it was of no concern whether or not the algorithms would be easy to implement. Therefore, before actually implementing them in a video control IC, considerable effort has to be made to study their practical feasibility. The same holds for the control matrix, which has to transfer the various calculator outputs to the different actuators in a proper way. As this transformation requires monitoring a lot of parameters, it appears to be quite an elaborate investigation in itself. However, as the results so far look promising, it is recommended to make the additional effort of actually implementing the algorithms in a video control IC.
-94-
Candidate: R. de la Haye
Graduation date: 12 February 1998
Graduation project: A license plate recognition system: the design of a license plate recognition system for Dutch license plates
Supervision: dr.ir. J.A. Hegt
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary: This report describes the design and implementation of a license plate recognition system for Dutch license plates. The system consists of five stages. The front end is formed by a high-speed shutter camera and a frame grabber that delivers the digitised images of passing cars. In the license plate segmentation step, the approximate position of the four corner points of a plate is, for the time being, indicated by hand; this stage has already been automated, but that implementation was not available to us. The corner points may not correspond to a rectangular area due to perspective distortion. A bilinear transformation that makes use of bilinear grey-value interpolation is applied to correct for this; the result is a rectangular license plate with a size of 180 x 40 pixels. Histogram stretching is applied to enhance the image for the character segmentation stage, which approximately segments the characters based on the properties of the vertical projection of the license plate. The resulting characters are normalised with respect to contrast, intensity and size. Then the characters are projected onto a low-dimensional space with the help of the Hotelling transform. This projection contains the relevant information needed to distinguish the characters. The transformation depends on a good segmentation, which is not guaranteed by the segmentation stage. Clues about the segmentation accuracy can be obtained by comparing the inverse Hotelling-transformed result with the original character: if they differ significantly, the segmentation is probably bad. This leads to a much improved segmentation and thus to a transformation that holds the information needed for classification. The Hotelling-transformed characters can be classified with different methods. A probabilistic neural network should give the best performance, but it does not because of the limited amount of sample data.
Classifying the transformed characters by Euclidean distance proved to give the lowest misclassification and rejection rates. 545 plates were used to test the system: a misclassification rate of 0.4% was achieved at a rejection rate of 13%. Further development of the system, for which a number of recommendations are given, is expected to increase the system performance.
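The Hotelling transform and the reconstruction-based segmentation check can be sketched with a few lines of linear algebra. The Hotelling (Karhunen-Loeve) transform is computed here via an SVD of the mean-centred training data; the toy four-pixel "characters" are invented for illustration.

```python
import numpy as np

def hotelling_fit(X, k):
    """Derive the Hotelling basis from training characters X (one flattened
    character per row); keep the k strongest eigen-directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, basis):
    """Forward transform: low-dimensional feature vector for classification."""
    return basis @ (x - mean)

def reconstruction_error(x, mean, basis):
    """Segmentation check used in the report: inverse-transform the projection
    and compare with the original; a large error suggests a bad segmentation."""
    xr = mean + basis.T @ project(x, mean, basis)
    return float(np.linalg.norm(x - xr))
```

A well-segmented character lies close to the training subspace and reconstructs almost perfectly, while a badly cut character leaves a large residual, triggering re-segmentation.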
-95-
Candidate: M. van Heijningen
Graduation date: 23 April 1998
Graduation project: DC Characterization and Noise Analysis of submicron Low-Power, Low-Voltage CMOS Technologies
Supervision: ir. E.P. Vandamme (IMEC Leuven), dr.ir. L.K.J. Vandamme (ERASMUS coordinator)
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary: This work deals with the DC characterization and noise analysis of submicron MOSFETs, produced with novel concepts, for low-power, low-voltage applications. First the DC behavior of these MOSFETs has been modeled with the BSIM3v3 SPICE model. The results have shown that BSIM3v3 is capable of modeling these novel MOSFETs very well. A special three-transistor macro-model has also been investigated for transistors with halo implantations; this model also resulted in good simulations. The quality of the simulations depends largely on the quality of the parameter extraction routines. To obtain the best possible parameter set, an extraction method has been used that allows parameters to be adapted and optimized by hand. The study of the DC behavior has shown that novel concepts are necessary to fabricate submicron MOSFETs for low supply voltages. Often a novel concept has a positive influence on one aspect of the DC behavior, for example the threshold-voltage roll-off, but a negative influence on another, for example the junction capacitances. For each new technology a well-balanced compromise of several concepts has to be found to obtain the optimal MOSFET. The noise analysis has been performed for several reasons: noise measurements can be used as a diagnostic tool, to study the quality of the MOSFETs and to study the influence of novel MOSFET concepts on the noise behavior. It has been shown that novel concepts can increase the noise by a factor of 50. The use of an amorphous gate instead of a poly gate has been studied separately; this study has shown that it increases the noise by a factor of 5. Modeling the noise with a unified noise model, which incorporates both number fluctuations and correlated surface mobility fluctuations, gives the best fits to the experimental results over all operating regimes. Such a noise model is implemented in the BSIM3v3 SPICE model.
The noise has been simulated in all operating regimes (subthreshold, linear and saturation) and shows very good agreement with the measurement data. In order to extract good parameter values for the unified noise models, a novel two-parameter extraction method has been presented that uses only a few experimental results. It has also been shown that the implementation of the BSIM3v3 noise model in SPICE is inaccurate in the subthreshold region for our technology. This is due to a technology-dependent parameter that has been assumed constant in SPICE, which makes the SPICE noise simulations in the subthreshold region valid only for a limited set of MOSFET technologies. An improved BSIM3v3 noise model is proposed.
-96-
Candidate: R.J.W. Jonker
Graduation date: 27 August 1998
Graduation project: Noise as a diagnostic tool in microelectronics
Supervision: dr.ir. L.K.J. Vandamme
Graduation professor: prof.dr.ir. W.M.G. van Bokhoven
Summary: During the graduation period, work was done on the subject of noise as a diagnostic tool in microelectronics. The work had two facets: on the one hand the work carried out at Alcatel Microelectronics in Oudenaarde, Belgium, and on the other hand the work on automated noise measurement set-ups. The research at Alcatel Microelectronics focused on 1/f noise in so-called High-Ohmic Poly silicon resistors (HIPOs) in relation to the sheet resistance R□. The noise parameter C_us turns out to depend linearly on the sheet resistance, and its value depends on the implant used (B or P): C_us for the P-implanted samples turned out to be 10x larger than for the samples implanted with B. It proved important to check the relation between the noise and the length of the sample, in order to detect non-negligible noise contributions from, for example, the contacts. As regards an automated noise measurement set-up, a set-up was examined that is suitable for measuring 1/f noise on metal layers and magnetic sensors. This set-up is controlled with National Instruments' LabVIEW and is able to vary the bias conditions (current through the sample / external magnetic field) automatically. The system has proved able to measure, process and store noise spectra fully autonomously for several days in succession. As part of this set-up, various low-noise amplifiers in various configurations (cross-correlation techniques and so-called matching transformers) were examined; the amplifiers were characterized by means of two parameters, R_eq and f_corner.
-97-
CAPACITEITSGROEP ELEKTRISCHE ENERGIETECHNIEK
-99-
Leerstoel Elektrische Energiesystemen
-101-
Candidate: J.A. Kaland
Graduation date: 15 October 1998
Graduation project: Black-box modelling for low-voltage switches
Supervision: ir. J.G. Sloot, ir. R.J. Ritsma (Holec)
Graduation professor: prof.ir. G.C. Damstra
Summary: Commissioned by Holec Laagspanning B.V. and the Eindhoven University of Technology, research was carried out into the application of black-box modelling to low-voltage load-break switches. These black-box models describe the general interruption behaviour of the switch. Black-box models of switches are characterized by a number of parameters, which are derived from measurements of current and voltage during a limited number of (successful or unsuccessful) interruption attempts. With these parameters it is possible to predict the thermal interruption behaviour of the switch in other test circuits as well, and the effect of design modifications can also be assessed. Such black-box models were used extensively in the past for high-voltage circuit breakers; their applicability to low voltage has rarely been investigated. During the first part of the graduation project, carried out at Holec Laagspanning, a literature search on this subject was performed. In total three (basic) models were found, among them the Mayr model with parameter variation and the Cassie-Mayr model. For these black-box models parameters have to be determined. In total five methods of parameter determination were found, of which two were tested: the methods of Amsinck and Ruppe. In addition, the possibility of determining the parameters graphically was investigated. In Holec's test laboratory, preliminary measurements of the interruption behaviour of a low-voltage load-break switch were performed. Subsequently, several prototypes of a new low-voltage load-break switch, made available by Holec, were tested under various network conditions in the High Currents Laboratory of the Eindhoven University of Technology. The data obtained from these measurements were then used as the basis for the modelling. In this investigation, unambiguous models of the test switches in question proved impossible to obtain.
Two reasons can be given for this: the unstable behaviour of the arc position around current zero, and the relatively frequent occurrence of dielectric re-ignitions, whereas the black-box models were based on thermal re-ignitions. For a conventional circuit breaker, with a more stable arc position at thermal re-ignitions, the modelling was indeed unambiguous.
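The Mayr model mentioned above describes the arc as a conductance g(t) obeying dg/dt = i(t)^2/(tau*P) - g/tau, with the time constant tau and cooling power P as the black-box parameters to be extracted. A minimal Euler integration (parameter values below are illustrative, not fitted ones):

```python
import math

def mayr_conductance(i_of_t, tau, P, g0, dt, t_end):
    """Integrate the Mayr arc equation dg/dt = i(t)**2/(tau*P) - g/tau with
    Euler steps, for a current-driven arc. tau and P are the black-box
    parameters derived from measured current and voltage."""
    g, t, trace = g0, 0.0, []
    while t < t_end:
        i = i_of_t(t)
        g += dt * (i * i / (tau * P) - g / tau)
        trace.append(g)
        t += dt
    return trace

# 50 Hz current, 100 A amplitude; the conductance collapses around current zero,
# which is where re-ignition or successful interruption is decided:
trace = mayr_conductance(lambda t: 100 * math.sin(2 * math.pi * 50 * t),
                         tau=2e-6, P=2e4, g0=1.0, dt=1e-7, t_end=0.02)
```

Fitting tau and P so that the simulated u = i/g matches measured arc voltages is, in essence, what the tested parameter-determination methods do; the instability of the arc position around current zero is exactly what breaks the fit for the load-break switches studied.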
-102-
Candidate: H.A. Prins
Graduation date: 27 August 1998
Graduation project: Detection system for fast fault recognition in DC networks, in combination with a hybrid DC switch
Supervision: ir. R.W.P. Kerkenaar / ir. A.M.S. Atmadji
Graduation professor: prof.ir. G.C. Damstra
Summary: A hybrid switch consists of a conventional switch (with contacts) and a semiconductor switch. Hybrid switches are mainly used for interrupting direct currents. Interrupting a direct current with a hybrid switch takes place in two phases. The first phase is the interruption of the current in the conventional switch by injecting a counter-current from a charged capacitor, so as to create a zero crossing of the direct current; the current thereby commutates to the semiconductor switch. In the second phase the current in the semiconductor switch is interrupted. To keep the capacitor for the counter-current injection small, a short-circuit current must be detected as quickly as possible. This report discusses two detection models for fast (< 100 µs) detection of short-circuit currents. The first model uses the current steepness (di/dt) as its parameter; the second uses the current magnitude. Both models are able to detect a short-circuit current within 100 µs. For the first model a Rogowski coil is used as sensor. For the second model various current sensors can be used, on the condition that the sensor has a high accuracy (0.1%). For the main circuit of the hybrid switch there are also two schemes. The first scheme, with a combined circuit for the injection of the counter-current and for driving the electrodynamic opening mechanism of the vacuum switch used, turned out not to work efficiently, owing to the relatively slow drive of the vacuum switch: the injection of the counter-current takes place later (300 µs to 1 ms), at a moment when the vacuum switch is already open.
Injecting the counter-current later, however, makes the current interruption more difficult, because the short-circuit current has more time to grow. Measurements show that it is possible to interrupt a current of 2 kA (with a steepness of 1.1 A/µs) in about 1.98 ms. This time is made up of 1.10 ms for the detection, 0.23 ms for opening the vacuum switch, 0.12 ms for bringing the current in the vacuum switch to zero, and 0.53 ms for extinguishing the current in the semiconductor switch. (The aim is to interrupt currents of 10 kA (10 A/µs) at 1500 V with a hybrid switch.)
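The first detection model, based on current steepness, reduces to a simple threshold test on di/dt, which a Rogowski coil delivers directly. The sampling rate, threshold and confirmation count below are assumptions for illustration, not the thesis values.

```python
def didt_trip(samples, dt, threshold, consecutive=3):
    """Flag a short circuit when di/dt exceeds `threshold` for a few
    consecutive samples (a crude guard against noise spikes); returns
    the detection time in seconds, or None."""
    count = 0
    for k in range(1, len(samples)):
        didt = (samples[k] - samples[k - 1]) / dt
        count = count + 1 if didt > threshold else 0
        if count >= consecutive:
            return k * dt
    return None

# A steady load current, then a fault ramping at 2 A/us (2e6 A/s):
dt = 1e-6  # 1 MHz sampling, assumed
samples = [5.0] * 50 + [5.0 + 2.0 * k for k in range(1, 101)]
t_detect = didt_trip(samples, dt, threshold=1e6)
```

With these (assumed) numbers the fault is flagged a few microseconds after onset, comfortably inside the < 100 µs budget that keeps the injection capacitor small.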
-103-
Candidate: L.W.A. Dorpmanns
Report no.: EG/98/894
Graduation date: 15 October 1998
Graduation project: Investigation into the influence of distributed generation on the electricity network
Supervision: ir. R.B.J. Hes (ENW, Alkmaar), ir. W.F.J. Kersten
Graduation professor: prof.ir. H.H. Overbeek
Summary: In recent years a strong rise of distributed generation has been observed in the electricity supply. Because of this increase, the character of the transmission/distribution network is changing. The present network was designed for "one-way traffic", from large power stations to the consumers. It must be investigated on which points the current design rules need to be adapted so that the new situation can be accommodated. The problem is approached from an existing ENW subnetwork, into which different amounts of distributed generation were introduced in different ways. The connection level was varied: infeed at the 10 kV busbar, at the 50 kV busbar, and at a new 50 kV substation was considered. In addition, different modes of infeed were examined: the generating units deliver active power and, in addition, either absorb or deliver reactive power, or operate at cos φ = 1. A forecast load situation around the year 2020 was assumed. In each variant the network was then adapted according to the n-1 criterion, assuming 100% reliable generation. The variants were examined for their consequences for load growth, investments, network design, and so on. The analyses show that keeping the voltages at their nominal level is no problem. They also show that connecting distributed generating units low in the network has advantages for the reactive power balance and saves investments. With a large share of distributed generation, however, the reliability becomes poor, because the network becomes "too thin". Another disadvantage of connecting distributed generating units low in the network is their contribution to the short-circuit power: if the units are fairly large, the permissible short-circuit power may be exceeded, and measures must then be taken.
The reactive-power balance improves when decentralized generating units supply reactive power to the network. If, on the contrary, they absorb a large amount of reactive power, compounding problems may arise from excessive phase differences between voltages and currents. Connecting decentralized units to the 50 kV busbars in the existing substations, especially when the units supply reactive power, saves investments in the 50 kV cable network, causes no short-circuit-power problems, and maintains good reliability. Connecting decentralized units to a new 50 kV substation entails high investments and yields little benefit for the reactive-power balance. Several of the aspects mentioned above have opposing effects: connecting generating units low in the network can save investments, but problems may then arise with the short-circuit withstand capability of the installations and reliability may deteriorate, which in turn requires additional investments. This trade-off must be weighed carefully. Aspects such as protection, stability of the generating units and the network, and voltage control still require further investigation.
Candidate: M.F.P. Janssen (Report no.: EG/98/885)
Graduation date: 27 August 1998
Graduation project: Localization of single-phase earth faults in medium-voltage networks with an isolated neutral
Supervision: ir. W.F.J. Kersten
Graduation professor: prof.ir. H.H. Overbeek
Summary: In the Netherlands, electricity distribution at medium-voltage level takes place through underground cables. Faults in these cables are mainly caused by excavation damage and by insulation defects. In the case of a short circuit, the protection trips the circuit automatically. Some medium-voltage networks, however, are operated with the star points of the transformers isolated. A contact between one of the three phases and earth, a so-called earth fault, then causes no large short-circuit current and requires no immediate disconnection. The supply remains intact, but this condition, which raises the voltages on the healthy phases, is only temporarily permissible. The fault location must be found so that the affected cable can be taken out of service for repair. Determining the cable section containing the earth fault is time-consuming, because the 50 Hz fault current to earth is independent of the fault location and depends only on the capacitances of the network concerned. The faulted cable can be identified by comparing the sum of the phase currents in all outgoing cables, but this still gives no indication of the cable section; for that, the distribution substations fed by the cable must be visited to measure the residual current on site. The graduation work comprises an investigation into a method for localizing single-phase earth faults in medium-voltage networks with an isolated neutral. The method uses the transients in voltages and/or currents that occur at the moment the fault arises. The fault is localized by calculating the total inductance of the network from the frequency of the transient and, using an equivalent circuit, deriving from this the inductance of the cable between the fault location and the feed-in point. With the cable parameters, the fault location can then be determined.
For the investigation, a representative medium-voltage network was modelled with the simulation program ATP, in which earth faults were simulated at various locations. The resulting transient phenomena were analysed. Although the method works in principle, a number of problems remain unsolved. The main problem at present is accuracy: for faults located a few kilometres away, deviations of the order of several hundred metres occur. The cause must lie in the strong damping of the transient and in the far-reaching network reduction that the method requires.
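The distance calculation described above can be sketched numerically. This is a minimal illustration, assuming a simplified series L-C resonance model; the function name and all parameter values (network capacitance, source-side inductance, per-kilometre cable inductance) are invented for the example and are not taken from the thesis.

```python
import math

def fault_distance_km(f_transient_hz, c_net_farad, l_source_henry, l_per_km_henry):
    """Estimate the distance to a single-phase earth fault from the frequency
    of the charge transient, assuming a series L-C resonance:
        f = 1 / (2*pi*sqrt(L_total * C))  =>  L_total = 1 / ((2*pi*f)^2 * C)
    The cable inductance is what remains after subtracting the source-side
    inductance; dividing by the inductance per km gives the distance."""
    l_total = 1.0 / ((2.0 * math.pi * f_transient_hz) ** 2 * c_net_farad)
    l_cable = l_total - l_source_henry   # inductance between feed-in point and fault
    return l_cable / l_per_km_henry

# Illustrative numbers only: 2 uF net capacitance, 1 mH source inductance,
# 0.4 mH/km cable inductance, 1 kHz measured transient frequency.
d = fault_distance_km(1.0e3, 2.0e-6, 1.0e-3, 0.4e-3)
```

The sketch also makes the accuracy problem visible: a small error in the measured transient frequency enters squared, so strong damping of the transient translates directly into hundreds of metres of location error.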
Chair of High-Voltage Engineering & Electromagnetic Compatibility
Candidate: R.A.A. de Graaff
Graduation date: 15 October 1998
Graduation project: Measurements on the 25 kV system of the Luxembourg railways
Supervision: dr. A.P.J. van Deursen, ir. J.B.M. van Waes
Graduation professor: prof.dr.ir. P.C.T. v.d. Laan
Summary: During my graduation work I contributed to three measurement campaigns. The first was a set of measurements at the Hemweg power plant to investigate isolated-phase bus currents. The second was an investigation of the current distribution at the Luxemburg railways, powered at 25 kV, 50 Hz. The third was a similar investigation in a medium- and low-voltage network in the case of a single-phase-to-ground short circuit in the 10 kV medium-voltage system. In all three experiments I contributed to the preparation of the differentiating/integrating measuring system, carrying out selection and calibration of components, field checks, and recording of the data. The first measurement was a cooperation between G. Pemen (KEMA) and J. van Waes (TUE). The goal was to determine all the currents flowing through the isolated-phase bus of a 650 MW generator. The measured currents were as expected and the temperature of the earthing conductors did not exceed the specifications. The last experiment was a cooperation between the power distribution company NUON and the group EHC, as part of the PhD research of J. van Waes. A TT and a TN grounding system were compared. Relevant voltages and currents were measured simultaneously at five positions: three in the 10 kV system and two at the power connections of nearby houses. The measurements were originally planned for June 1998 but were delayed to September because of unforeseen circumstances. The data are reported elsewhere; because of the delay that report was not yet finished at the time of writing, but it is to be considered part of my graduation work. The investigations along the Luxemburg railway system were a cooperation between Holland Railconsult, who initiated the study, NS Technisch Onderzoek, NS Railinfrabeheer, the Luxemburg Railway Company, and the group EHC. There were several goals:
1. A test case for the larger investigation of the 25 kV, 50 Hz fed Havenspoorlijn, and indirectly also for other 25 kV systems planned in the Netherlands.
2. To prove the suitability of the differentiating/integrating measuring system for 25 kV traction systems. Various current and voltage sensors were developed and their correct operation was verified. Improvements for future investigations can be formulated with the data now available.
3. Luxemburg Railways were interested in data about the current distribution in a simple, directly fed catenary, in comparison with an AT power distribution with a negative feeder.
4. Finally, actual measurements are of great help as input for the modelling of such a power distribution system.
Preparation of the sensors and measurements took several weeks of intense labour in June. The data were gathered in week 27 of 1998, during day and night measuring sessions. This graduation report contains a detailed analysis of the measurement results and of their accuracy; the factors influencing the accuracy are summarized. To a lesser extent, conclusions about the current distribution for both power distribution systems are given. The current distribution also depends on the train position; several positions were measured and some first results are presented. Particularly interesting is the current that is missing when the currents through all intended conductors are summed: catenary, negative feeder if present, rails, and cables parallel to the track. The missing current must return through the earth. The data clearly show that the earth current is substantially reduced by the AT system. As far as we could check, no experimental data on this are available in the open literature.
Candidate: P.A.H.J. Huijbrechts (Report no.: EH 98.A.154)
Graduation date: 10 December 1998
Graduation project: Reliability and optimization of a pulsed corona reactor
Supervision: dr.ir. E.J.M. van Heesch
Graduation professor: prof.dr.ir. P.C.T. v.d. Laan
Summary: Within the capacity group EVT, High-Voltage Engineering and EMC, of the Faculty of Electrical Engineering at the TUE in Eindhoven, research is carried out on pulsed high voltages, EMC, and electrical discharges, in particular on pulsed corona discharges. Within the Joule project we work on the cleaning of biogas: tar particles are removed from hot biogas produced by wood gasification. Within the EnergieNed project we apply pulsed corona to odour abatement and to the removal of styrene. Within the NIZO research on cold pasteurization of milk we apply high pulsed fields to inactivate micro-organisms. For these purposes two sources have been built that can generate pulsed high voltages in a continuous process: up to 100 kV with a pulse duration of 100 ns (50 MW of electrical power during the pulse) and a repetition rate of 1000 pulses per second. The aim is to make these sources reliable in order to eventually arrive at an industrial application. Aspects of this are optimization of the electrical circuit, the build-up of data, and durability. One of the problems in the electrical circuit is the fast high-voltage switch, a spark gap. Through this switch the high voltage is delivered to the corona reactor via a transmission line transformer (TLT). The corona reactor is a 3.5 m long cylinder with a diameter of 25 cm, with a corona wire on its axis to which the pulsed high voltage is applied. Corona, an intense gas discharge, then develops in the reactor and creates various radicals in the gas; these can break down all kinds of harmful products such as toluene and styrene. The spark gap shows too large a spread in its switching moment, which causes energy losses. These can be reduced by making the spark gap switch at the right moment by means of triggering.
Several options for this were examined, among them a magnetic switch used as a transformer, which turned out to fix the trigger moment well with little spread. An extensive set of endurance tests was carried out to obtain a better view of the behaviour of the sources. The spark gap with brass electrodes showed wear rather quickly, so the electrodes were replaced by a harder material, CuW. This combination has now sustained about 10⁹ shots without serious wear (roughly 120 kC of charge transferred). Another aspect is that the efficiency still needs improvement; at present it is around 35%. The lifetime of the TLT cables has also improved considerably: it was at most 100 hours at first, whereas the current new cables have already run 380 hours without problems.
SUMMARIES OF GRADUATION THESES, FACULTY OF ELECTRICAL ENGINEERING, 1999
Eindhoven University of Technology accepts no liability for the contents of the graduation-thesis summaries included in this volume.
CONTENTS
CAPACITY GROUP TELECOMMUNICATION TECHNOLOGY & ELECTROMAGNETISM
Chair of Telecommunication ............ 3
Chair of Electronic Devices ............ 11
Chair of Electromagnetism ............ 13

CAPACITY GROUP MEASUREMENT & CONTROL SYSTEMS
Chair of Measurement and Control ............ 18
Chair of Signal Processing ............ 27
Chair of Medical Electrical Engineering ............ 32
Chair of Electromechanics & Power Electronics ............ 33

CAPACITY GROUP INFORMATION & COMMUNICATION SYSTEMS
Chair of Digital Information Systems ............ 38
Chair of Design Technology for Electronic Systems ............ 52
Chair of Electronic Circuits ............ 57

CAPACITY GROUP ELECTRICAL ENERGY TECHNOLOGY
Chair of Electrical Energy Technology ............ 61
Chair of High-Voltage Engineering & Electromagnetic Compatibility ............ 63
CAPACITY GROUP TELECOMMUNICATION TECHNOLOGY & ELECTROMAGNETISM
CHAIR OF TELECOMMUNICATION
Candidate: L. Brunken
Graduation date: 15 June 1999
Graduation project: Context Weighting in Wavelet Image Compression
Supervision: Dr.ir. Tjalkens
Graduation professor: Prof.dr.ir. G. Brussaard
Summary: Within the field of image coding, context is frequently used to compress images better. By using a context, one can estimate the probability of a coefficient's value. If this estimated probability is close to the actual probability, the arithmetic encoder can compress the image almost down to the entropy. The context often consists of nearby coefficients; however, it is not known which contexts are best for compression purposes. The Context Tree Weighting (CTW) algorithm weights over all possible context tree models in order to estimate the probability of the next bit in the data stream, where the context usually consists of the last few bits. The CTW algorithm works very well for text compression. The question was whether the weighting principles of the CTW algorithm could be applied to image coding. Using the Embedded Zerotree Wavelet image coding technique, images were decomposed and compressed to a bit stream. This bit stream was further compacted by an adjusted CTW algorithm, which uses a context derived with a Minimum Description Length algorithm and can deal with non-stationarities.
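To illustrate the weighting principle, not the thesis's adjusted algorithm, a minimal binary CTW with the Krichevsky-Trofimov (KT) estimator can be sketched as follows; all names and the fixed context depth are choices made for the sketch.

```python
import math

class Node:
    """One context-tree node: symbol counts plus log2-probabilities."""
    def __init__(self):
        self.a = 0            # zeros seen in this context
        self.b = 0            # ones seen in this context
        self.log_pe = 0.0     # log2 of the KT (memoryless) estimate
        self.log_pw = 0.0     # log2 of the weighted probability
        self.children = {}

def ctw_update(node, context, bit, depth):
    """Update KT estimates and weighted probabilities along the context path."""
    count = node.b if bit else node.a
    # sequential KT update: P(next = bit) = (count + 1/2) / (a + b + 1)
    node.log_pe += math.log2((count + 0.5) / (node.a + node.b + 1.0))
    if bit:
        node.b += 1
    else:
        node.a += 1
    if depth == 0:                      # leaf: no further model splitting
        node.log_pw = node.log_pe
        return
    child = node.children.setdefault(context[-1], Node())
    ctw_update(child, context[:-1], bit, depth - 1)
    lw = sum(c.log_pw for c in node.children.values())
    # weighting: P_w = 1/2 * P_e + 1/2 * P_w(child0) * P_w(child1), in log domain
    m = max(node.log_pe, lw)
    node.log_pw = m - 1.0 + math.log2(2 ** (node.log_pe - m) + 2 ** (lw - m))

def ctw_codelength(bits, depth=3):
    """Ideal code length (in bits) that CTW assigns to a binary sequence."""
    root = Node()
    padded = [0] * depth + list(bits)
    for i, bit in enumerate(bits):
        ctw_update(root, padded[i:i + depth], bit, depth)
    return -root.log_pw
```

Feeding the resulting probabilities to an arithmetic coder would realize this code length in practice; a constant sequence costs only a few bits, and a structured sequence is charged roughly the cost of its best context model plus a small model-description penalty.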
Candidate: M. de Gier
Graduation date: October 1999
Graduation project: Modelling ARQ for a high-speed wireless ATM based LAN
Supervision: Dr.ir. P.F.M. Smulders (TUE) and Ir. D. v.d. Meulenhof
Graduation professor: Prof.dr.ir. G. Brussaard
Summary: In the Advanced Communications and Telecommunication Services (ACTS) project MEDIAN, a wireless Local Area Network (LAN) at 60 GHz with a capacity of 150 Mbit/s, based on the Asynchronous Transfer Mode (ATM), is being investigated. For this purpose a demonstrator has been designed consisting of a base station and two portable terminals. To add functionality without implementing it directly in hardware, a simulation model was developed with the existing demonstrator as a starting point, using the advanced simulation software package OPNET. One functionality already added is handover: the transfer of a portable terminal's communication from one base station to another, usually as a consequence of moving the terminal. The goal of the graduation project is to add another functionality, namely retransmission. Although the demonstrator applies Forward Error Correction (FEC) that leads to a quite acceptable cell error probability, this error probability is not yet low enough for some ATM service classes. It is expected, however, that applying retransmission is 'cheaper' in bandwidth usage than a heavier FEC. To verify the influence of retransmission (ARQ), in particular for real-time traffic, a simulation model was designed by means of a seven-step development model and added to the existing simulation model. Initially a model was chosen that allows an unlimited number of retransmissions. To test the model, a number of simulations were carried out for real-time balanced and unbalanced traffic, also examining the interaction with other system components such as the bandwidth allocation protocol (Time of Expiry, ToE).
In particular, the ATM cell end-to-end delay and the sizes of the buffers in the base station and the portable terminal were investigated. It turned out that the ToE protocol has considerable influence on the end-to-end delay over a non-error-free channel and also on the performance of the applied ARQ protocol. Furthermore, it can be concluded that with retransmission virtually error-free transmission is possible for real-time traffic, and that the buffer size in both the base station and the portable terminals remains limited.
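The bandwidth cost of unlimited retransmission can be estimated with a simple model: if a cell is corrupted independently with probability p, the number of transmissions per cell is geometric with mean 1/(1-p). The following Monte Carlo sketch, with invented names and nothing to do with the OPNET model, checks that relation.

```python
import random

def mean_transmissions(p_loss, n_cells=100000, seed=1):
    """Monte Carlo: average number of transmissions per cell when every
    corrupted cell is retransmitted until it arrives (unlimited ARQ)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_cells):
        t = 1
        while rng.random() < p_loss:   # retransmit while the cell is corrupted
            t += 1
        total += t
    return total / n_cells

# analytic value is 1 / (1 - p): e.g. p = 0.1 costs only ~11% extra bandwidth,
# which is why ARQ can be cheaper than a heavier FEC
m = mean_transmissions(0.1)
```

The same geometric tail also explains why the end-to-end delay, rather than the bandwidth, is the critical quantity for real-time traffic: a small fraction of cells needs several transmissions.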
Candidate: A. v. Zelst
Graduation date: October 1999
Graduation project: Extending the capacity of Next Generation Wireless LANs Using Space Division Multiplexing combined with OFDM
Supervision: Dr.ir. P.F.M. Smulders (TUE) and Dr.ir. R.D.J. van Nee (Lucent Technologies)
Graduation professor: Prof.dr.ir. G. Brussaard
Summary: The main goals in developing new wireless communication systems are increasing the bit rate and increasing system capacity. Because the available frequency spectrum is limited, future systems should be characterised by improved spectrum efficiency. Recent information-theory research has revealed that the multipath wireless channel is capable of enormous capacities, provided that the multipath scattering is sufficiently rich and is properly exploited. A possible way to exploit the multipath scattering properly is Space Division Multiplexing (SDM) or Space Division Multiple Access (SDMA). Basically, these techniques transmit different signals simultaneously on different transmit antennas. The parallel streams of data are mixed up in the air, but can be recovered at the receiver by using the SDM algorithms proposed in this thesis, like the Zero Forcing or Minimum Mean Square Error method, with or without Decision Feedback Decoding, or the Maximum Likelihood Decoding (MLD) technique. When the multipath is exploited properly, both the data rate and the Signal-to-Noise Ratio performance can be increased. In this thesis the Bit Error Rate performance and complexity of the mentioned SDM algorithms are compared. It is shown that the BER performance of MLD achieves a diversity order equal to the number of receive antennas. This is (much) better than the diversity order of Zero Forcing, which, as shown in the literature, equals the number of receive antennas minus the number of transmit antennas plus one. The big disadvantage of MLD is that its complexity grows exponentially with the number of transmit antennas. For a reasonable number of transmit antennas (at most 5), however, its complexity is shown to be comparable with that of other SDM methods.
Furthermore, in this thesis, single-carrier Space Division Multiplexing based on MLD is combined with the multicarrier technique Orthogonal Frequency Division Multiplexing (OFDM) to make the system more robust against delay-spread impairments. A basic introduction to OFDM is given and it is described how to combine SDM with OFDM. This system is simulated in C++ using the parameters of the new IEEE 802.11 OFDM standard for wireless LANs. The delay-spread channel is modelled by exponentially decaying Rayleigh fading. By performing Monte Carlo simulations, Bit Error Rate performances for different antenna configurations and different delay spreads are shown in this report. To achieve a better performance, forward error correction coding based on a convolutional code is implemented. It is shown that an even better performance is obtained if a soft-decision-input Viterbi decoder is used to decode this convolutional code; therefore, the MLD algorithm is adapted to produce soft-decision outputs. From all the results presented in this report, it can be concluded that Space Division Multiplexing in combination with OFDM is a promising solution for increasing the bit rate and system capacity.
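The two detection principles compared above can be illustrated for a 2x2 channel with BPSK symbols. This is only a toy sketch under a noiseless-channel assumption, not the thesis's C++ simulator; function names and the channel matrix are invented for the example.

```python
import itertools

def zf_detect(H, y):
    """Zero Forcing for a 2x2 channel: invert H, then slice to BPSK (+/-1)."""
    (a, b), (c, d) = H
    det = a * d - b * c
    x0 = ( d * y[0] - b * y[1]) / det
    x1 = (-c * y[0] + a * y[1]) / det
    return [1 if x.real >= 0 else -1 for x in (x0, x1)]

def mld_detect(H, y):
    """Maximum Likelihood Decoding: exhaustive search over all BPSK vectors,
    picking the one whose channel output is closest to the received vector."""
    best, best_metric = None, float("inf")
    for s in itertools.product((-1, 1), repeat=2):
        r0 = H[0][0] * s[0] + H[0][1] * s[1]
        r1 = H[1][0] * s[0] + H[1][1] * s[1]
        metric = abs(y[0] - r0) ** 2 + abs(y[1] - r1) ** 2
        if metric < best_metric:
            best, best_metric = list(s), metric
    return best

# noiseless example: transmit s = [1, -1] over H; both detectors recover s
H = [[1.0, 0.5], [0.3, 1.0]]
s = [1, -1]
y = [H[0][0] * s[0] + H[0][1] * s[1],
     H[1][0] * s[0] + H[1][1] * s[1]]
```

The exhaustive loop in `mld_detect` is also where the complexity remark above comes from: the search space has 2^(number of transmit antennas) entries per constellation bit, whereas Zero Forcing only pays for one matrix inversion.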
Candidate: B.A.F. de Jager
Graduation date: 31 August 1999
Graduation project: Measurement of linearity of laser diodes for analog transmission of TV signals over a graded-index polymer optical fiber
Supervision: Ir. H.P.A. v.d. Boom
Graduation professor: Prof. G.D. Khoe
Summary: Polymer optical fibers are a promising alternative to glass fibers in Fiber To The Home and in-house applications. Due to their relatively large core diameter they allow easy handling and coupling, which is essential because these applications involve large numbers of connections. In order to use analog AM for TV transmission over this kind of fiber, it is very important to use laser diodes with a very linear light-output-power versus current transfer function, because non-linearity of the laser diode introduces both harmonic and intermodulation distortion products that can interfere with the TV signals. For TV transmission applications, linearity is usually expressed in the Composite Second Order (CSO) and Composite Triple Beat (CTB). These describe the non-linear distortion of second and third order, respectively, in a particular channel, by measuring the combined effect of all distortion products falling in that channel. Measurement of the CSO and CTB requires specific measurement equipment. If this equipment is unavailable, the CSO and CTB can also be estimated by measuring the second-order harmonic distortion and the third-order intermodulation distortion, respectively; with these values and a count of all second- and third-order distortion products for the frequency allocation used, the estimate can be made. When measuring the second-order harmonic distortion and the third-order intermodulation distortion of a laser diode, an optimal bias current and modulation index must first be determined, because non-linear distortion strongly depends on these values. Experiments have been carried out with a NEC NDL3320s visible-light laser diode. The optimal bias current was found to be 33.5 mA and the optimal modulation index 2.5%. With these settings, the second-order harmonic distortion varies from -52 dBc in the higher frequency range (above 1 GHz) to -36 dBc near 280 MHz.
The third-order intermodulation distortion varies from -66 dBc in the lowest part of the frequency range (below 40 MHz) to -45 dBc near 360 MHz. The Carrier-to-Noise Ratio varies from 30 dB in the highest part of the frequency range (above 1 GHz) to 41 dB in the lowest part (below 40 MHz). These measurement results make this laser diode, without special measures, unsuitable for use in a practical AM TV transmission system with at least 25-30 channels. The linearity of laser diodes can be improved externally, for example with predistortion, feedforward, or feedback techniques. So far the results of these techniques as described in the literature have shown little success, and it is expected that better results can only be reached at the cost of great complexity. It is therefore advisable to focus on the development and use of laser diodes with better intrinsic linearity instead of trying to compensate for the non-linearity externally. Another solution to the problem of laser diode non-linearity is the use of different modulation techniques such as FM or digital modulation; for these techniques the linearity demands are more relaxed, at the cost of a higher bandwidth per channel.
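The estimation route described, a single-product distortion measurement combined with counting the products that land in a channel, amounts to adding the powers of N roughly equal, incoherently adding products. A minimal sketch, with an invented function name and purely illustrative numbers:

```python
import math

def composite_distortion_dbc(single_product_dbc, n_products):
    """Combine n equal, incoherently adding distortion products: the powers
    add, so the composite level rises by 10*log10(n) above one product."""
    return single_product_dbc + 10.0 * math.log10(n_products)

# e.g. a single second-order product at -52 dBc and 30 second-order
# products falling into the channel under study:
cso_estimate = composite_distortion_dbc(-52.0, 30)
```

This also shows why the channel count matters: going from a handful of channels to a 25-30 channel plan multiplies the number of beat products landing in each channel, pushing the composite distortion many dB above the single-product figure.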
Candidate: J.B. Kwaaitaal
Graduation date: 9 February 1999
Graduation project: A Multi-Standard Simulation Platform for Hybrid Fiber/Coax Networks
Supervision: Ir. H.P.A. v.d. Boom, Ir. S. Pronk, Dr.ir. M. de Jong
Graduation professor: Prof.ir. G.D. Khoe
Summary: The liberalization of the telecommunications market leads to an increase in competition. CATV (community antenna television) network operators are seeking new advanced services to offer to the customer. It is foreseen that various services will be provided over hybrid fiber/coax (HFC) networks, such as telephony, internet access, (near) video on demand, and interactive services. These services require upgrades of the network to enable bi-directional communication. They have different traffic characteristics and demand different quality-of-service (QoS) levels. To guarantee specific QoS levels, advanced scheduling and medium access control (MAC) algorithms must be developed. Standards for communication in HFC networks are becoming available at the moment, and a better understanding of the performance differences between the standards is needed. For these purposes, the Multi-Standard Simulation Platform for Hybrid Fiber/Coax Networks (MSSP) was developed. We concentrate on the following standards: Digital Video Broadcasting (DVB) [1], Digital Audio Video Council 1.3 (DAVIC), IEEE 802.14 (IEEE), and Multimedia Cable Network Standard (MCNS) [2]. These standards specify the physical layer and the MAC layer of an HFC network to standardize communication between the head-end (HE) and the network terminations (NTs), leaving a certain amount of freedom in implementation. We are mainly interested in upstream transmission (NT to HE), where the following mechanisms for medium access are available: (1) ALOHA access, (2) contention tree access, (3) reservation access, and (4) fixed access. DVB and DAVIC allow transmission of data and requests in ALOHA. DAVIC and IEEE allow transmission of requests in a contention tree. Reservation access is granted as a result of the requests (request-grant mechanism). Fixed access is based on periodic grants. The MSSP is designed hierarchically, following a top-down approach; it consists of a number of levels, which are described in a modular fashion.
For flexibility and cost reduction in possible products based on this system, we have designed a system with low-complexity slave NTs that communicate with an intelligent HE. The MAC intelligence and the scheduling algorithms are therefore implemented in the HE; the addition of advanced scheduling algorithms in the HE should not impose changes upon the NTs. In order to simulate different standards within the MSSP, we implemented the standard specifics as a number of parameters that can be changed for each simulation. The way in which NTs choose a particular transmission method is based on the queue status at the NTs and a priority scheme. The scheduling of the upstream channel is divided into a bandwidth allocation part and a grant generation part; the latter determines the specific use of each time slot on the upstream channel. Advanced scheduling strategies can use information on connections and their QoS demands: (1) agreed at connection setup and (2) gathered by monitoring the active connections. Statistics on the contention processes can serve as input to schedulers to optimize the allocation of bandwidth to the different types of access. From the implementation of the first scheduling strategies, we conclude that the simulation platform is a flexible tool for developing advanced scheduling and MAC strategies. After its design, the simulation platform was implemented in the simulation environment BONeS (a Cadence product). In this way, we were able to carry out simulations comparing the MCNS and DVB/DAVIC standards. We conclude that MCNS has two advantages over DVB/DAVIC: (1) it makes better use of direct access (data in ALOHA) and (2) it has lower transmission overhead. The results of this comparison plead for extending the simulation platform to fully support MCNS as well.
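Of the four upstream access mechanisms, ALOHA's behaviour under load is the easiest to sketch. The toy Monte Carlo below, with invented names and no relation to the MSSP itself, reproduces the classic slotted-ALOHA throughput S = G * exp(-G): a slot carries a cell only when exactly one station transmits in it, which is why contention access must be complemented by reservation access at higher loads.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for drawing a Poisson(lam) variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def slotted_aloha_throughput(g, n_slots=200000, seed=7):
    """Fraction of slots with exactly one transmission attempt, when the
    number of attempts per slot is Poisson with mean g (the offered load)."""
    rng = random.Random(seed)
    successes = sum(1 for _ in range(n_slots) if poisson(rng, g) == 1)
    return successes / n_slots

# theory: S = G * exp(-G), peaking at 1/e (about 0.37) for offered load G = 1
s = slotted_aloha_throughput(1.0)
```

The 1/e ceiling on contention throughput is precisely the kind of statistic that, fed back to the HE scheduler, lets it shift bandwidth between contention and reservation access.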
[1] Recently, DVB took over the specification of DAVIC. For ease of reference, we continue to use DVB to refer to the old DVB specification.
[2] Not yet implemented in the MSSP.
Candidate: M.C.C. Lakare
Graduation date: 12 October 1999
Graduation project: Wavelength conversion by cross-gain modulation using a 1550 nm semiconductor optical amplifier
Supervision: Dr. H. de Waardt
Graduation professor: Prof. G.D. Khoe
Summary: Wavelength conversion based on cross-gain modulation in a saturated semiconductor optical amplifier is studied by means of measurements and simulation. To obtain detailed knowledge of the system performance of the 1550 nm wavelength converter, the following aspects have been taken into account: input power dependence, conversion efficiency, polarisation dependence, extinction ratio, maximum upward and downward wavelength span, and pulse form conservation. The target of the project was to examine an all-optical solution for wavelength conversion of the channel wavelengths of the KPN test bed "BOLERO" (1535 nm to 1541 nm) to the wavelengths of the IOP test bed developed within the TTE-ECO group (1551 nm to 1560 nm) and vice versa. As the conversion of only one channel was required, a conversion from 1541 nm to 1551 nm (and back) was studied theoretically and experimentally. Within this span, the output power extinction ratio was found to be -6 dB for a pump power setting considerably larger (about 3 dB) than the probe power setting. The power penalty for downward conversion (1551 nm to 1541 nm) was found to be 1.5 dB, whereas for upward conversion (1541 nm to 1551 nm) a penalty of 5 dB was observed. These observations agree with theory, as the semiconductor amplifier shows strongly wavelength-dependent saturation characteristics, given by the energy dependence of the occupation probabilities within the conduction band and the valence band. Qualitative agreement with a simplified model has been established. The pulse form was preserved in the wavelength conversion experiments.
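The cross-gain mechanism can be sketched with a toy static saturation model: a strong pump drives the amplifier into saturation and lowers the gain seen by the co-propagating probe, so the pump's intensity modulation is copied, inverted, onto the probe wavelength. This is a crude approximation (real SOA saturation follows an implicit equation and is wavelength dependent, as the summary notes), and all numbers below are illustrative.

```python
import math

def saturated_gain_db(g0_db, p_in_mw, p_sat_mw):
    """Toy homogeneous saturation model: G = G0 / (1 + P_in / P_sat)."""
    g0 = 10 ** (g0_db / 10.0)
    return 10.0 * math.log10(g0 / (1.0 + p_in_mw / p_sat_mw))

# probe gain with the pump OFF (probe only) versus pump ON (probe + pump):
gain_pump_off = saturated_gain_db(20.0, 0.1, 1.0)
gain_pump_on = saturated_gain_db(20.0, 0.1 + 2.0, 1.0)
extinction_db = gain_pump_off - gain_pump_on   # modulation copied onto the probe
```

Even this toy model shows why the converted extinction ratio improves when the pump is several dB stronger than the probe: the gain swing between pump-on and pump-off grows with the pump's share of the total input power.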
Candidate: B.M. Leenders
Graduation date: February 1999
Graduation project: MAI in CATV networks using CDMA cable modems
Supervision: Ir. H.P.A. v.d. Boom, Ir. F.J.J. Kennis
Graduation professor: Prof.ir. G.D. Khoe
Summary: A CATV network has a good architecture for interactive services. Bi-directional communication on such networks requires a return channel from the users to the central head-end; this return channel lies between 5 and 65 MHz. Because several subscribers in such a CATV network have to use the same channel, a multiple access technique is required. We focused here on the Direct Sequence Code Division Multiple Access (DS-CDMA) technique. In this technique all subscribers are able to use the whole bandwidth simultaneously, because each user is assigned a unique code, which makes it possible to discriminate between the users. Signals from one user interfere with signals from other users; this is called Multiple Access Interference (MAI). In this report we look at the MAI in CATV networks using CDMA cable modems. To investigate the MAI we could have built a number of modems; another possibility was to use an Arbitrary Waveform Generator (AWG). A program had to be written that could generate the output signals of all the users. The program was written in Matlab, chosen because its output could be loaded into the AWG. We established that the program was able to generate the signals of one or more client transmitters. We generated signals for up to 16 transmitters and measured the Bit Error Rate (BER) versus the Signal-to-Noise Ratio (SNR). With more users the BER becomes worse because of the MAI and the quantisation noise. The quantisation noise is introduced by the scaling process that must be carried out before the signals can be loaded into the AWG. In the case of one, two, or four transmitters this quantisation noise is not very large and therefore does not degrade the system performance much; the penalty, a measure for the system degradation, is as small as expected. In the case of eight or sixteen transmitters the penalty becomes large.
This penalty is not only due to MAl but also due to the quantisation noise that becomes larger with an increasing number of transmitters.
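The MAI mechanism described above can be reproduced in a few lines of simulation. This is a minimal sketch, not the thesis's Matlab program: it assumes random ±1 spreading codes of length 63 and four simultaneous users, and despreads one user by correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 63                       # chips per bit (assumed spreading factor)
K = 4                        # simultaneous users
n_bits = 200

bits = rng.integers(0, 2, (K, n_bits)) * 2 - 1    # +/-1 data bits per user
codes = rng.integers(0, 2, (K, N)) * 2 - 1        # random +/-1 spreading codes

# each user spreads its bits with its own code; the channel superimposes them
tx = bits[:, :, None] * codes[:, None, :]         # shape (K, n_bits, N)
channel = tx.sum(axis=0)                          # shape (n_bits, N): contains MAI

# receiver for user 0: correlate each bit interval with code 0 and slice
stat = channel @ codes[0] / N
decisions = np.where(stat >= 0, 1, -1)
ber = np.mean(decisions != bits[0])
```

With random codes, the despread statistic for user 0 is the desired bit plus a cross-correlation term from the other K−1 users; increasing K raises that MAI term, which is the effect measured in the report.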
LEERSTOEL ELEKTRONISCHE BOUWSTENEN
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
F. Staals 31 augustus 1999 Modellering en ontwerp van quantum dot-lasers Dr. Th.G. van de Roer Prof. Dr. G.A. Acket
Summary: A literature study has been carried out on the modelling of quantum-dot lasers. On the basis of a number of models from the literature, a qualitative picture has been formed of the properties and behaviour of these lasers. An extensive analytical model for minimising the threshold current density of a quantum-dot laser has been studied. A literature study has also been performed on the design and realisation of this newest type of semiconductor laser, and the worldwide developments have been summarised and discussed. On the basis of the theoretical models and the state of the technology, a first step has been made towards the design of a quantum-dot laser. A wavelength of 1300 nm was chosen, because it is suitable for optical communication. For the growth of the quantum dots a technique was chosen that yields a quality well suited to quantum-dot lasers. Moreover, a Vertical Cavity Surface Emitting Laser was chosen, because this laser type has so far rarely been used for quantum-dot lasers.
LEERSTOEL ELEKTROMAGNETISME
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar: Summary:
W.H.B. Janssen 12 oktober 1999 An Embedding Approach to Scattering by a Circular Cylinder S.H.J.A. Vossen Prof.dr. A.G. Tijhuis, Prof.dr. A.P.M. Zwamborn
Nowadays the demand for techniques that can look inside an object is increasing. In some cases it is not possible, or undesirable, to open the object to look inside. For example, when a patient is treated with hyperthermia, it is desirable to control the manner in which his/her body is irradiated. The temperature distribution inside the body could then be monitored by determining the changes in the dielectric properties at the locations of interest. To plan such a treatment, we must first determine the dielectric properties of several tissues inside the human body and their temperature dependence. A number of methods are available to look inside an object: we could use infrared, X-rays, sound waves, etc. In this report we use electromagnetic waves. With the aid of electromagnetic waves and an inversion technique we are able to look inside an object. An inversion technique is a method to calculate properties of an object under test. In the case of electromagnetic waves, we typically find the permittivity and the permeability from the response of the object upon illumination by a known source. To perform inversion, the response of a known object to an impressed source must first be available. Determining this field is referred to as solving the forward or direct scattering problem. In our case, the object is a human limb inside a perfectly conducting water-filled cylinder. The water keeps the contrast of the object low with respect to the surrounding medium, since the human body consists for the major part of water; in addition, it can be used to control the skin temperature. The perfectly conducting cylinder is needed to contain the water; it also prevents radiation of electromagnetic waves to the environment and thus avoids EMI. Once the forward problem is solved we can start with the inversion technique. Several techniques are available for handling the inversion problem; examples are the Born approximation, the distorted-wave Born iteration, the modified-gradient method and nonlinear optimization. In this study, first the forward problem for an object in a homogeneous environment is treated. To solve this problem, an integral equation for the field inside the object is derived. From the solution, we determine the scattered field in the surrounding medium. Next, we take a look at the fields near the casing for the case when it contains only water. Finally, the field in the complete configuration is determined by combining the solutions for both sub-problems. The resulting procedure is highly efficient, and can be used in the context of the inverse-scattering methods mentioned above. Representative numerical results are presented and discussed.
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
N.W. de Jong
7 december 1999 Currents in reinforcement structures and their influence on induction loops for the hearing impaired. Drs. J.G.A. van Riswick, Dr. U.C. Das and Dr.ir. Th. Kwaaitaal Prof.dr. A.G. Tijhuis
Summary: In this report the effect of a wire grid in a reinforced concrete structure on the magnetic field above it is calculated. An equivalent impedance of a cylindrical wire has been introduced to derive an integral equation for the unknown current in the wire-grid structure, which results from the incident field generated by a rectangular loop along which an alternating current is enforced. The integral equation is derived from the magnetic vector potential, and is discretised by assuming a circulating current in each elementary loop of the grid. The secondary current along each elementary wire of an elementary loop is calculated as the difference between the currents in the two adjoining elementary loops. Once the current distribution on the grid is known, the total scattered field is obtained as the summation of the fields radiated by the elementary loops in the wire-grid structure. Representative results are presented and compared with experiment. The results indicate that the numerical approach developed in this report is suitable for building a complete design tool for the induction loops mentioned above.
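The discretisation step described above (one circulating current per elementary loop, wire currents as differences of adjoining loop currents) can be mimicked on a toy 2×2 grid. The impedance and EMF values below are hypothetical placeholders, not values from the report:

```python
import numpy as np

Zs = 1.0 + 0.5j            # assumed self impedance of one elementary loop
Zm = 0.2 + 0.1j            # assumed mutual impedance of adjoining loops
shared = [(0, 1), (0, 2), (1, 3), (2, 3)]   # loop pairs sharing a wire

# impedance matrix of the four circulating loop currents
Z = np.zeros((4, 4), dtype=complex)
np.fill_diagonal(Z, Zs)
for j, k in shared:
    Z[j, k] = Z[k, j] = Zm

# EMF induced in each elementary loop by the primary rectangular loop
V = np.ones(4, dtype=complex)

I = np.linalg.solve(Z, V)            # circulating current per elementary loop
# secondary current in a shared wire = difference of the adjoining loop currents
wire_currents = {pair: I[pair[0]] - I[pair[1]] for pair in shared}
```

For the symmetric excitation chosen here all loop currents are equal, so every shared-wire current vanishes; an asymmetric EMF distribution would produce nonzero wire currents, which radiate the scattered field summed in the report.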
Naam kandidaat: Mstudeerdatum: Mstudeerproject: Begeleiding: Mstudeerhoogleraar:
J.W. Lobeek 15 juni 1999 Analysis of finite phased arrays including a non-ideal feed network and mutual coupling. Ir. J.L.M. Buijnsters (Signaal), Ir. E.W. Kolk (Signaal), Ir. M.C. van Beurden Prof.dr. A.G. Tijhuis
Summary: Two methods are presented to analyse finite phased arrays including a non-ideal feed network and mutual coupling between the antennas. These methods use Z- and S-parameters, respectively, to model the feed network and the mutual coupling between the antennas. For both formulations, methods are presented to solve the network model. The advantage of the S-parameter formulation is that the network parameters can be measured directly. Therefore, this formulation is chosen to analyse a finite line array with a corporate feed and mutual coupling between the antennas. The network of this structure is analysed with the aid of a commercial microwave network solver (MDS). Two new network elements are introduced to simulate the entire network model. It turns out that MDS is a suitable tool for analysing large line or planar arrays. Numerical results are presented for the line array. Four different situations are introduced to analyse the influence of the reflection of the antennas and the mutual coupling between the antennas. The different errors in the array factor observed in these situations are investigated. Amplitude errors degrade the array factor considerably more than phase errors. Such errors are mainly introduced by mutual coupling between the antenna elements. From a comparison with simulated results for an infinite line array, it follows that edge effects are primarily responsible for the errors introduced in the different situations. The method presented in this report can also be used to determine the high standing-wave "hot spots" within the feed network and the VSWR at its input under different terminal conditions. The results from the simulations are compared with experimental results. Approximate agreement can only be obtained at a single frequency. To increase the significance of the method, in particular the accuracy of the model and the influence of the feed network should be investigated further.
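The effect of excitation errors on the array factor can be illustrated with a small numerical sketch. This assumes a 16-element, half-wavelength-spaced line array with invented error levels, not the Signaal array analysed in the thesis:

```python
import numpy as np

N, d = 16, 0.5                        # elements, spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
n = np.arange(N)
steer = np.exp(1j * 2 * np.pi * d * np.outer(n, np.sin(theta)))

def array_factor(a):
    """|AF(theta)| for complex element excitations a (length N)."""
    return np.abs(a @ steer)

rng = np.random.default_rng(0)
ideal = array_factor(np.ones(N))                                   # uniform array
with_amp_err = array_factor(1 + 0.2 * rng.standard_normal(N))      # amplitude errors
with_phase_err = array_factor(np.exp(1j * 0.2 * rng.standard_normal(N)))  # phase errors
```

Comparing the sidelobe regions of `with_amp_err` and `with_phase_err` against `ideal` reproduces qualitatively the kind of array-factor degradation studied in the four situations of the report.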
CAPACITEITSGROEP MEET & BESTURINGSSYSTEMEN
LEERSTOEL METEN EN REGELEN
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
Y.T. Tso 7 december 1999 High performance Model Predictive Control of a binary, high-purity distillation column. Prof. Backx Prof.dr.ir. A.C.P.M. Backx
Summary:
In the process industry a new type of controller, the so-called Model Predictive Controller (MPC), has been developed to drive a process towards the best economic conditions. The main advantage of a Model Predictive Controller is its capability of dealing with constraints, but the real-time optimization involved is computationally very demanding. As a consequence the currently available commercial Model Predictive Control packages are not suited for high performance control, by which we mean fast tracking behaviour and disturbance rejection over large frequency ranges. The Internal Model Control scheme provides insight into the relation between feedforward and feedback control. In the case of a perfect model, the Internal Model Control scheme reduces to a feedforward scheme and can be designed as such; the controller can then be seen as an approximation of the inverse of the process. A weak aspect of applying a Model Predictive Controller is the real-time optimization, which imposes a very high computational load. For processes with fast dynamics, this results in poor tracking behaviour and disturbance rejection over large frequency ranges. To ensure high performance control we also need to cope with the fast dynamics of the process. This is the main motivation for introducing a new control concept that enables high performance control. The basic idea is to introduce an inner loop in the control structure with the Model Predictive Controller. The fast dynamics of the process are controlled by the inner loop, so the inner loop can be tuned for maximum performance. Because of the advantages of the Internal Model Control scheme, the inner loop is designed with this scheme. The new control concept results in improved tracking behaviour and disturbance rejection over large frequency ranges.
To evaluate the performance of the developed technique, we applied the new control concept to a binary, high-purity distillation column, a highly non-linear, ill-conditioned process. We use this process to determine the performance of the new control concept. The commercially available INCA® controller of Ipcos Technology is used as the Model Predictive Controller.
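The inner-loop idea relies on the IMC property quoted above: with a perfect internal model the feedback signal vanishes and the controller acts as a filtered inverse of the process. A minimal discrete-time sketch with a first-order toy plant (not the distillation column):

```python
import numpy as np

a, b = 0.9, 0.1        # toy plant: y[n+1] = a*y[n] + b*u[n]
alpha = 0.5            # IMC filter pole: the single knob setting closed-loop speed

n_steps = 200
r = np.ones(n_steps)                         # unit setpoint step
y = np.zeros(n_steps)                        # real plant output
ym = np.zeros(n_steps)                       # internal model output
u = np.zeros(n_steps)
v_prev = 0.0
for n in range(n_steps - 1):
    # feedback signal = plant minus model output (zero when the model is perfect)
    v = r[n] - (y[n] - ym[n])
    # Q = model inverse * first-order filter, as a difference equation
    u[n] = alpha * u[n - 1] + (1 - alpha) / b * (v - a * v_prev)
    v_prev = v
    y[n + 1] = a * y[n] + b * u[n]           # real plant
    ym[n + 1] = a * ym[n] + b * u[n]         # internal model (here identical)
```

Because the model is perfect, the loop behaves as the open-loop cascade Q·G: a first-order step response with pole `alpha`, which is the feedforward interpretation used in the abstract.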
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
H.B.G. Derksen 20 april 1999 Eddy Current Field Compensation. Design and implementation of a control system in MRI systems. Ir. V. van Acht, Ir. D. de Bruin, Ing. A. Machielsen, Ir. W. van Groningen Prof.dr.ir. P.P.J. van den Bosch
Summary: Concluding the Master of Science study in Electrical Engineering at the Eindhoven University of Technology, this Master project has been carried out in association with Philips Medical Systems. The subject of the project is to study the possibility of designing and implementing a control system to automate the compensation of Eddy Current fields in an MRI system. Eddy Current fields are induced in MRI systems by pulsed magnetic field gradients. Compensation is done by pre-emphasising the current that induces the desired gradient field. Two methods have been studied to identify the Eddy Current influences. One includes system knowledge (compact system identification); the other does not (black-box system identification). Compact system identification requires a model for the Eddy Current field, which is identified in continuous time. Black-box identification is a discrete-time identification; the model is dictated by the chosen black-box method (e.g. Output Error, Box-Jenkins, Steiglitz-McBride, etc.). For both methods computer simulations have been performed on real measurement data, which gave very satisfying results. Compact system identification has been implemented and tested; this resulted in compensation of the Eddy Current field within system requirements. Black-box identification has the advantage that no a priori knowledge of the system is needed, which is especially useful for future developments in MRI systems. Implementations can vary from a software implementation or the use of a DSP to a complete hardware implementation. The test of the compact system identification was done using a DSP. The main result is that a control system can be used to automate the compensation of Eddy Current fields. Black-box identification appears to have the advantage for future developments, so it is recommended to study and test this method further.
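The pre-emphasis principle can be sketched with a simplified single-exponential eddy current model. The time constant and coupling below are invented for illustration; a real gradient chain has several exponential terms identified as described above:

```python
import numpy as np

dt, tau, k = 1e-4, 5e-3, 0.1        # assumed eddy time constant and coupling
t0 = 0.005                           # step time of the desired gradient
t = np.arange(0.0, 0.05, dt)
g_des = (t >= t0).astype(float)      # desired unit gradient step

def actual_field(g_cmd):
    """Actual gradient = command minus eddy field e, with de/dt = -e/tau + k*dg/dt."""
    e = np.zeros_like(g_cmd)
    for i in range(1, len(g_cmd)):
        e[i] = e[i - 1] - dt * e[i - 1] / tau + k * (g_cmd[i] - g_cmd[i - 1])
    return g_cmd - e

plain = actual_field(g_des)          # uncompensated: step followed by eddy decay

# exact pre-emphasis for this model: boost the step by k/(1-k), decaying
# with the modified time constant tau*(1-k)
tp = np.clip(t - t0, 0.0, None)
boost = (k / (1 - k)) * np.exp(-tp / (tau * (1 - k))) * (t >= t0)
compensated = actual_field(g_des + boost)
```

The overshoot in the commanded current cancels the decaying eddy field, so the actual gradient follows the desired step almost exactly; the identification methods in the thesis estimate the `tau` and `k` values that make this work on the real magnet.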
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
F. Erol 15 juni 1999 Nonlinear PID control using Hammerstein or Wiener models. Dr.ir. Y. Zhu Prof.dr.ir. P.P.J. van den Bosch
Summary: The objective of this project is to study nonlinear process control using Hammerstein or Wiener models. First, closed-loop identification of Hammerstein and Wiener models is discussed. The Wiener model consists of two parts: a linear dynamic part followed by a static nonlinear part. The Hammerstein model has the same parts but in reversed order. Nonlinear PID controller structures for the Hammerstein and Wiener models are proposed. By compensating the static nonlinear part of these models with its inverse, a linear PID controller can be used to control them. The linear PID controller is designed using the Internal Model Control approach. The identification and control method is applied to a pH process. A mathematical representation of the pH process is derived: the process is modelled as a continuous stirred tank reactor (CSTR) using mass balances. The derived model is not used for control but for analysing the dynamic behaviour of the process; this process model can be approximated by a Wiener model. The pH process has two input streams of chemical solution, of which one, a base solution, is used as the input variable, and the other, a constant stream of an acid solution, acts as a disturbance. The output of the process is the pH value inside the CSTR. The pH process is identified using the SISO Wiener model identification algorithm, and the obtained model is used for controller design. Simulations with the linear and the nonlinear PID controller are performed and compared. Finally, both controllers are tested on the pH process; the control objective is to keep the pH value in the CSTR at a desired value. The nonlinear PID controller can control the process over a larger range than the linear PID can. The controllers were implemented in LabVIEW.
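The inverse-compensation idea can be sketched as follows: a toy Wiener process (first-order linear dynamics followed by a monotone cubic nonlinearity, standing in for the pH titration curve) is controlled by a linear PI controller acting on the inverted measurement. The gains and the cubic are invented for illustration:

```python
import numpy as np

a, b = 0.8, 0.2                  # linear dynamic part: z[n+1] = a*z[n] + b*u[n]
f = lambda z: z ** 3             # monotone static nonlinearity (toy titration curve)
f_inv = lambda y: np.cbrt(y)     # its inverse, used to linearise the loop

Kp, Ki = 1.0, 0.2                # linear PI gains (hand-tuned for the toy model)
y_sp = 8.0                       # output setpoint, i.e. z_sp = 2 after inversion

z, integ = 0.0, 0.0
y_hist = []
for n in range(200):
    y = f(z)                                  # measured (nonlinear) output
    e = f_inv(y_sp) - f_inv(y)                # error in the linearised domain
    integ += e
    u = Kp * e + Ki * integ                   # linear PI on the compensated error
    z = a * z + b * u                         # linear process dynamics
    y_hist.append(y)
```

Because the controller sees `f_inv(y)`, the loop it closes is linear, which is exactly why a single linear PID tuning can cover the whole operating range of the Wiener model.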
Naam kandidaat: Afstudeerdatum: Afstudeerproject:
Begeleiding: Afstudeerhoogleraar:
M.J. Groen 7 december 1999 Formulation of the Piecewise Linear Control of an Inverted Pendulum as a Linear Complementarity Problem. M.Sc. Ph.D. A. Polanski Prof.dr.ir. P.P.J. van den Bosch
Summary: A popular subject for research in Control Engineering is the inverted pendulum. Strategies to swing up and balance the pendulum in its inverted position have been described in the literature many times. In this report it is shown that the pendulum can be swung up by means of the energy pumping method, a method widely used in research. Subsequently, the pendulum is balanced with a linear controller. The topic of this report is the analysis of the stability of the (controlled and uncontrolled) inverted pendulum with the aid of piecewise linear Lyapunov functions. For this purpose the state space is decomposed into disjoint triangular cells, the triangulation. On this triangulation a piecewise affine approximation of the inverted pendulum is calculated, and it is shown how a piecewise linear Lyapunov function can be defined upon this triangulation and how the search for such a function can be formulated as a linear program (LP). This enables the system analyst to present the problem to an LP solver, which will produce a piecewise linear Lyapunov function that can be used to examine the stability of a region in the state space. Matlab functions have been written to transform the differential equations of the pendulum into an LP problem. Additionally, a function has been written to display a 3-dimensional representation of the piecewise linear Lyapunov function found by the LP solver. Numerical examples of a few simple control situations are presented to demonstrate that the LP solver finds results that conform to knowledge about the actual stability of these simple situations. The generation times are acceptable.
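The LP formulation can be illustrated on a one-dimensional toy system ẋ = −x (standing in for the cell dynamics of the stabilised pendulum; the pendulum's own 2-D triangulation is beyond a sketch). On a grid of vertices, the unknowns are the vertex values of V; positivity and decrease of V along the flow become linear inequalities that an LP solver such as scipy's `linprog` can handle:

```python
import numpy as np
from scipy.optimize import linprog

xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # grid vertices (cells = intervals)
f = lambda x: -x                              # toy stable piecewise-linear dynamics
eps = 0.01                                    # strictness margin
m = len(xs)

A_ub, b_ub = [], []
# positivity: V(x_i) >= eps*|x_i|   ->   -v_i <= -eps*|x_i|
for i, x in enumerate(xs):
    if x != 0.0:
        row = np.zeros(m); row[i] = -1.0
        A_ub.append(row); b_ub.append(-eps * abs(x))
# decrease: on each cell, slope*f(x) <= -eps*|x| at each nonzero vertex
for i in range(m - 1):
    h = xs[i + 1] - xs[i]
    for x in (xs[i], xs[i + 1]):
        if x != 0.0:
            row = np.zeros(m)
            row[i + 1] = f(x) / h; row[i] = -f(x) / h    # slope * f(x)
            A_ub.append(row); b_ub.append(-eps * abs(x))

res = linprog(c=np.ones(m), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 0) if x == 0 else (0, None) for x in xs])
v = res.x    # vertex values of a piecewise linear Lyapunov function
```

A feasible solution (here essentially a scaled |x|) certifies stability of the covered region; infeasibility of the LP is the solver's way of saying no piecewise linear certificate exists on that triangulation.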
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J. Hermeler 31 augustus 1999 Automatic Vehicle Identification: a common approach Prof.dr.ir. P.P.J. van den Bosch / Prof.ir. M.P.J. Stevens
Summary: This thesis is the result of a graduation project with EDS and Opel in Antwerp. The subject of this research is the AVI (Automatic Vehicle Identification) system, as it is currently integrated with the production process of the Opel factories. Based upon the real-time identification of the vehicles in process, a certain amount of functionality is offered to the factory systems. The actual application of this system can be divided roughly into four parts. First of all, it takes care of the requests for order information at several different locations along the production line. Secondly, it drives the many material management systems. Thirdly, it follows the individual vehicles through the entire production process for administrative purposes. Finally, it stores the production data that is generated during the production process. At the moment, each Opel factory already has its own specific implementation of the AVI system. Although the name is the same, the implementations appear to be completely different. First of all, they all use a different identification system, ranging from barcode systems to modern RF-ID (Radio Frequency Identification) systems with programmable tags; the actual AVI functionality is strongly dependent on the choice of the identification system. Secondly, the integration of each implementation with the factory processes differs severely. The two central questions that are answered in this thesis are: "Define the common AVI requirements and the corresponding system specifications" and "Design a model of a new AVI system that meets these requirements and that creates a common solution." To answer these questions, this research starts with a discussion of the common AVI requirements, as found after analysis of the different Opel factories in general, and the Opel Belgium Plant 2 and the Opel Eisenach plant in particular.
These common requirements are expected to be sufficient to design a new common system that can replace each of the current AVI systems. Based upon the common AVI requirements, the corresponding system specifications are constructed. The exact interpretation of the AVI requirements is discussed with respect to the current implementations; wherever necessary, this interpretation is altered to suggest a more favorable solution. Based upon these results, the AVI system is completely redesigned. The choice of the factory model has been dominated by the demand for a common solution. The decomposition technique used rests on the observation that each Opel factory consists of a common configuration of elementary so-called 'factory' modules. The design of the new AVI system structure completely mirrors these modular factory structures. The common AVI requirements are realized by distributing the system specifications over the defined factory modules. The resulting AVI module descriptions are used to define the final generic AVI modules, which can be used to implement the AVI system as a common solution that meets the needs of each factory.
Naam kandidaat: Afstudeerdatum: Afstudeerproject:
Begeleiding: Afstudeerhoogleraar:
Chung-Yan Lam 9 februari 1999 A comparison of the Hybrid Systems Simulators: Chi, Prosim and Omsim Ir. W.P.M.H. Heemels Prof.dr.ir. P.P.J. van den Bosch
Summary:
Hybrid systems can be described as systems which contain both continuous and discrete dynamics. Many real-life situations can be modelled by hybrid systems. Because of the complexity of these systems, we wonder which simulation languages and/or simulation programs are most suited for describing and simulating hybrid systems. Continuous systems can be modelled by e.g. ordinary differential equations (ODE) or differential algebraic equations (DAE). ODE are explicit equations and can be solved relatively easily, in contrast with DAE, which are implicit. Discrete systems can be modelled by e.g. Petri nets, finite state machines, or automata. Currently most simulation programs are oriented towards solving either continuous systems or discrete systems separately, but not mixtures. There are, however, a few packages on the market which claim to be able to simulate hybrid systems. In this thesis, we investigate which (combination of) language(s)/simulation package(s) is/are most suited for describing/simulating hybrid systems. Furthermore, since hybrid systems may contain discrete events (DE), time events (TE) and state events (SE), we also want to know how these events can be integrated in the different simulation packages.
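The event types mentioned above can be made concrete with the classic hybrid example of a bouncing ball: continuous dynamics (free fall), a state event (the height crossing zero) and a discrete reset (velocity reversal with restitution). This sketch detects the event by sign change and interpolation, which is essentially what the simulators under comparison must do internally:

```python
dt, g, restitution = 1e-3, 9.81, 0.8
h, v, t = 1.0, 0.0, 0.0             # drop from 1 m at rest
bounces, heights = 0, []
while t < 3.0:
    h_next = h + dt * v              # explicit Euler step of the continuous mode
    v_next = v - dt * g
    if h_next < 0.0:                 # state event: height crosses zero this step
        frac = h / (h - h_next)      # locate the event time inside the step
        v_event = v - frac * dt * g
        h, v = 0.0, -restitution * v_event    # discrete reset of the velocity
        t += frac * dt
        bounces += 1
        if abs(v) < 1e-3:            # Zeno guard: stop when bounces accumulate
            break
    else:
        h, v, t = h_next, v_next, t + dt
    heights.append(h)
```

A pure ODE solver would integrate straight through the floor; a pure discrete-event simulator has no notion of the continuous flight phase. Handling both in one model is exactly the hybrid-simulation capability the thesis compares across Chi, Prosim and Omsim.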
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J. Lammerts 31 augustus 1999 Automatic PID Controller tuning based on Closed-Loop Identification Dr.ir. Y.C. Zhu Prof.dr.ir. P.P.J. van den Bosch
Summary: PID control has been and will remain the basic control strategy in the process industry. In modern model-based multivariable process control, such as Model Predictive Control (MPC), most manipulated variables are the setpoints of lower level PID control loops. Developing design methods which lead to the optimal operation of PID controllers is therefore of significant interest. In this work PID controllers resulting from an IMC-based PID controller design procedure have been simulated in a commercially available controller configuration, using first and second order process models common in industrial process control engineering. This revealed which controller calculation method should be used in the procedure for a specific process model to arrive at the 'optimal' PID controller (see Table 2-1). An IMC-based PID controller has been tested in practice on a laboratory pH process. To obtain an accurate model of this process, it has been successfully identified with software based on the asymptotic method of identification. Because this identification procedure also supplies an upper bound on the modelling error, interaction between process identification and IMC-PID controller design is possible. This is very practical, since an IMC-PID controller has only one tuning parameter, which relates directly to the speed of response and to the robust stability of the closed loop. The IMC-PID controller test results clearly show that the performance of the calculated controller is in accordance with the theory of chapter two (IMC-based PID Controllers).
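For a first-order-plus-dead-time model G(s) = K·e^(−θs)/(τs+1), one commonly cited IMC-based PID tuning rule (derived with a first-order Padé approximation of the dead time) reduces the design to the single filter parameter λ mentioned above. The sketch below implements that textbook rule; it is not necessarily the exact calculation method selected in the thesis's Table 2-1:

```python
def imc_pid(K, tau, theta, lam):
    """IMC-based PID settings for a FOPDT model K*exp(-theta*s)/(tau*s + 1).

    lam is the IMC filter time constant: the single tuning knob that trades
    speed of response against robustness.
    """
    Kc = (2 * tau + theta) / (K * (2 * lam + theta))   # controller gain
    tau_i = tau + theta / 2                            # integral time
    tau_d = tau * theta / (2 * tau + theta)            # derivative time
    return Kc, tau_i, tau_d

# example: K=2, tau=10, theta=2, filter lam=3 (all illustrative numbers)
Kc, tau_i, tau_d = imc_pid(K=2.0, tau=10.0, theta=2.0, lam=3.0)
```

Increasing λ lowers only Kc while leaving the integral and derivative times untouched, which is why the single parameter maps so directly onto the speed/robustness trade-off described in the abstract.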
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
H.W.H. Theunissen 9 februari 1999 Analysis and control of a laser tracking system Dr.ir. A.A. H. Damen Prof.dr.ir. P.P.J. van den Bosch
Summary:
In the Measurement and Control group a laser tracking system is being developed to measure or calibrate the tool centre point of a robot. A laser beam is pointed at the centre of a mirror in an air bearing seat and deflected to a retro-reflector, which is attached to the end of the robot arm. The returned laser beam is split in two by a beam-splitter. One part passes the beam-splitter and is received in a laser interferometer, which measures the distance the laser beam has travelled. The other part of the laser beam is deflected to a position sensitive device. This device is used to control the tilting of the mirror such that the laser beam hits the retro-reflector exactly in its centre. By measuring the two angles of the mirror and the length of the path the laser beam has followed, the position of the tool centre point can be calculated. For this final thesis, first the accuracy of the laser tracking system has been analysed theoretically. The effects of several kinds of inaccuracies have been calculated in a two-dimensional space. The inaccuracies are divided into inaccuracies caused by several kinds of displacements, and inaccuracies of optical components and external influences. Displacements here mean movements of the mirror in its bearing seat and misalignments of the optical components. For all the inaccuracies caused by displacements, the deviation of the distance the laser beam travels with respect to the ideal situation has been calculated; the same has been done for one angle of the mirror. From these calculations it may be concluded that the deviations with respect to the ideal situation are acceptable. From the accuracy analysis it can be concluded that it is important to control the air gap of the mirror's semi-sphere in its bearing seat; the air gap should stay at a constant value. Two H∞ controllers have been designed: one based on a seventh order model of the process and one based on an approximated third order model. The calculated controllers have been tested in Simulink. The simulations showed that the actuator does not saturate and that the air gap varies within an acceptable range for a certain bandwidth. For testing the controller on the real system, the controller has been implemented in a dSPACE system. The controller based on the third order model performed as the simulations predicted. The controller based on the seventh order model could not be implemented because of the limited sample frequency of the dSPACE system. To control the two angles of the mirror, two angle controllers have been designed, again as H∞ controllers. These controllers have also been tested in Simulink before implementation on the real system.
LEERSTOEL SIGNAALVERWERKING
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
D. Cornelissen 31 augustus 1999 Improvements of the filtered-X algorithm in adaptive noise canceling applications Dr.ir. P.C.W. Sommen Prof.dr.ir. P.P.J. van den Bosch
Summary: The filtered-X algorithm is a frequently used algorithm in the field of acoustic noise canceling. This algorithm uses an estimate of the secondary path impulse response Hs to update the weights correctly. To increase the robustness of the system, online adaptation of this secondary path estimate is desirable. Furthermore, it is known from the literature that if the phase error between Hs and its estimate is larger than 90 degrees in any frequency bin, the filtered-X algorithm becomes unstable. If the secondary path or the room acoustics change, online estimation of the secondary path therefore becomes necessary for a stable filtered-X algorithm. Before the research was aimed at this online modeling of the secondary path, the properties of the acoustic noise canceller (ANC) were examined and used in the further research. Different techniques are described and a new online modeling technique is proposed. In this report it is shown that the reverberation characteristics of the room and the secondary path characteristics degrade the performance of the ANC. If the frequency spectrum of Hs contains frequency bins where the power is relatively low, the adaptation speed in these bins is slower, which leads to a larger computational error, so that the final attenuation also decreases. However, if the number of loudspeakers is higher than the number of microphones, the extra secondary paths can compensate for these frequency bins and the performance increases significantly. This is only possible if the secondary paths do not have common frequency bins where the power is low. Current online adaptation techniques can be divided into two classes: techniques that estimate Hs by using additive noise, and overall modeling techniques without additive noise. A disadvantage of the overall modeling techniques is the extra overhead needed for the estimation of the primary path impulse response; furthermore, this method has difficulties when extended to a multipoint environment. The use of additive noise techniques is therefore preferred. However, this method adds noise to the reference signal, which may be audible. In the proposed new method the noise is added to the reference signal in each frequency bin separately, in such a way that its energy is low compared to the reference signal level in that bin. In this way the added noise will not significantly disturb the reference signal in an audible way. Thus only those frequency bins of Hs are updated for which the reference signal contains relatively high energy. Furthermore, with this method it is easy to keep the phase error for each frequency bin below 90 degrees. Simulation results show that with this method variations in both the primary and the secondary acoustic impulse responses can be modeled in a real environment.
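The baseline algorithm under discussion can be sketched in a few lines. This is the standard single-channel filtered-X LMS with a fixed (offline) secondary path estimate Ŝ; the paths, step size and tonal reference are invented, and the proposed per-bin online estimation is not reproduced here:

```python
import numpy as np

fs, n = 8000, 4000
x = np.sin(2 * np.pi * 200 * np.arange(n) / fs)   # tonal reference signal
P = np.array([0.0, 0.9, 0.4])      # assumed primary path impulse response
S = np.array([0.0, 0.7, 0.3])      # assumed secondary path Hs
S_hat = S.copy()                   # offline estimate of Hs (here perfect)

d = np.convolve(x, P)[:n]          # disturbance at the error microphone
fx = np.convolve(x, S_hat)[:n]     # filtered-X signal: reference through S_hat

L, mu = 16, 0.01
w = np.zeros(L)
xbuf = np.zeros(L); fxbuf = np.zeros(L); ubuf = np.zeros(len(S))
e = np.zeros(n)
for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx[i]
    ubuf = np.roll(ubuf, 1); ubuf[0] = w @ xbuf     # anti-noise sample
    e[i] = d[i] - S @ ubuf                          # residual at the microphone
    w += mu * e[i] * fxbuf                          # filtered-X LMS update
```

The update direction comes from `fxbuf`, the reference filtered through Ŝ; a phase error of more than 90 degrees between S and Ŝ at the tone frequency would flip the sign of that direction, which is the instability mechanism the report's online estimation is designed to avoid.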
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J.P. van Gassel 12 oktober 1999 Analysis and synthesis of musical residual noise. Ir. Ritzerfeld Prof.dr.ir. P.P.J. van den Bosch
Summary: This report proposes a method for the analysis and synthesis of musical residual noise, used for audio coding purposes in a sinusoidal audio coder, based on a nonuniform multi-scale filter bank. This musical residual noise is what remains after all tonal parts (sinusoids) and transient phenomena have been extracted from the audio signal. The design of the noise coder is based on the underlying principle that the human auditory system is not sensitive to the temporal and spectral fine structure of noise within so-called critical bands. The residual noise signal can therefore be sufficiently characterised in terms of its short-time energy on a critical band (ERB) scale. The filter bank design is compared to an existing implementation based on a Discrete Fourier Transform (DFT). The properties of this DFT-based system have been determined and compared to those of the filter bank approach. It can be concluded that although the DFT-based implementation seems to yield lower time delays, the filter bank based implementation results in higher quality synthetic noise signals. Keywords: 1. residual noise, 2. audio coding, 3. sinusoidal coding.
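The critical-band energy description can be sketched with the Glasberg-Moore ERB-number scale: divide the spectrum into bands of equal width on the ERB scale and keep only the short-time energy per band. The band count and frame below are illustrative, not the coder's actual filter bank:

```python
import numpy as np

def hz_to_erb(f):
    """Glasberg & Moore ERB-number scale."""
    return 21.4 * np.log10(1.0 + 0.00437 * f)

def erb_to_hz(e):
    return (10.0 ** (e / 21.4) - 1.0) / 0.00437

fs, n_fft, n_bands = 16000, 1024, 21
# band edges equally spaced in ERB number between 50 Hz and Nyquist
edges = erb_to_hz(np.linspace(hz_to_erb(50.0), hz_to_erb(fs / 2), n_bands + 1))

rng = np.random.default_rng(0)
frame = rng.standard_normal(n_fft)              # one frame of residual noise
spec = np.abs(np.fft.rfft(frame)) ** 2
freqs = np.fft.rfftfreq(n_fft, 1 / fs)
band_energy = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                        for lo, hi in zip(edges[:-1], edges[1:])])
```

Only the `band_energy` vector per frame needs to be transmitted; the decoder resynthesises noise with matching per-band energy, which is the perceptual principle the abstract relies on.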
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
M. van Hest 20 april 1999 Multiresolution analysis of partial discharge signals. Dr. Bastiaans Prof.dr.ir. P.P.J. van den Bosch
Summary: To monitor the aging process of the insulation material in power distribution cables, so-called 0.1 Hz measurements are carried out. By applying a high voltage to a cable, partial discharges (PDs) are caused. PDs also occur under normal conditions, but take place less often. These PDs, which cause a concentrated amount of charge to travel through the cable, can be measured at the beginning of the cable. Due to reflections a number of pulses are measured; one PD signal typically consists of three pulses, more or less sunken in noise. The position and size of the pulses tell us something about the magnitude and location of the PD, two parameters that are important. One measurement contains a few hundred of these PD signals, all of which need to be analysed. Together, they provide an indication of the condition of the cable: a lot of large discharges in one location indicates that the insulation material has aged more there than in other places. The analysis of a PD signal consists of localising the first three pulses in the signal and is done by a human operator. If it is possible to automate the analysis, a considerable amount of time can be saved. The goal of this project is to investigate the possibility of automating the analysis of PD signals by means of time-frequency methods. In this report three methods for detecting and localising these pulses are compared: wavelet analysis, an inverse filter bank, and a matched filter bank. The methods are tested using test signals and real data collected during 0.1 Hz measurements. It will become clear that it is possible to automate the analysis of PD signals. The best method appears to be a matched filter bank, with which good results are achieved. However, to be able to analyse all possible signals (for instance, signals containing more than one PD) we need to better understand the decisions made by the human operator. Further research will have to focus on this aspect.
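The matched-filter idea that came out best can be sketched as follows: correlate the measurement with an assumed pulse template and pick the strongest local maxima. The damped-sine template, pulse positions and noise level are invented test data in the spirit of the report's test signals:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
k = np.arange(40)
template = np.exp(-k / 6.0) * np.sin(2 * np.pi * k / 12)   # assumed pulse shape

true_pos = [300, 900, 1500]          # injected pulse positions (direct + reflections)
amps = [1.0, -0.6, 0.4]
sig = np.zeros(n)
for p, a in zip(true_pos, amps):
    sig[p:p + len(template)] += a * template
sig += 0.05 * rng.standard_normal(n)  # measurement noise

mf = np.correlate(sig, template, mode="valid")   # matched filter output
score = np.abs(mf)                    # sign-independent: reflections may invert
detected = []
for _ in range(3):                    # pick 3 peaks with a guard interval
    p = int(np.argmax(score))
    detected.append(p)
    score[max(0, p - 60):p + 60] = 0.0
detected.sort()
```

The matched filter maximises the output signal-to-noise ratio for a known pulse shape, which is why it outperforms the other two methods on pulses "more or less sunken in noise"; the remaining open problem in the report is what to do when the template assumption breaks down.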
30
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
J. Smeets 31 augustus 1999 Bandwidth extension of narrow-band speech. Dr.ir. P.C.W. Sommen Prof.dr.ir. P.P.J. van den Bosch
Summary: This M.Sc. thesis describes a project in which research was done towards bandwidth extension of band-limited speech. Speech used, for example, in telephone communication (narrow-band speech) is band-limited from 300 to 3400 Hz. This means that a portion of the low and high frequencies has been removed, which is the reason that telephone speech does not sound natural. To examine people's appreciation of various band-limited music and speech signals, listening tests were done in which the sound quality of the presented signals was judged. From the tests it could be concluded that the wider the bandwidth of the signal, the more it was appreciated. An exception to this (for both speech and music) is narrow-band plus only high frequencies. Next, some theory concerning the human voice and signal processing is presented. Then the kernel of the system, the harmonics generator, is discussed. The generator consists of a non-linear element (non-linear in frequency but linear in amplitude), which produces sub-harmonics and higher harmonics of the input signal. These sub-harmonics form a substitute for the low frequencies that are not present in the narrow-band signal. This only works for voiced speech (mainly vowels), because it has a harmonic structure. Extending unvoiced speech (non-tonal consonants) with this element does not result in a substitute for the original signal, but it appeared that it did not cause an annoying distortion either. It appeared that, depending on the harmonic structure of the signal, the non-linear element's settings must sometimes be modified. When these settings are made manually for a female speech sample, the result sounds very reasonable, certainly when the loudness of the synthesised low part is adapted to that of the original low part. Furthermore, algorithms that can alter the element's settings automatically are discussed.
The algorithms are based on the fact that the produced harmonics are (approximately) equal to those of the narrow-band signal. Hence the sub-harmonics will fit the narrow-band signal. However, the algorithm does not work properly in all cases. Finally, some recommendations are given which might improve the system.
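A hedged stand-in for such a non-linear element: full-wave rectification is linear in amplitude (scaling the input scales the output) but non-linear in frequency, turning a tone at f0 into even harmonics. This is an illustration of the principle only; the thesis's element additionally produces the sub-harmonics, which rectification does not.

```python
import math

# y = |x| maps a tone at f0 to a DC term plus harmonics at 2*f0, 4*f0, ...
fs, f0, n = 8000, 200, 8000
tone = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
out = [abs(s) for s in tone]

def bin_power(signal, freq, fs):
    """Power of one DFT bin (assumes an integer number of cycles in the window)."""
    re = sum(s * math.cos(2 * math.pi * freq * t / fs) for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / fs) for t, s in enumerate(signal))
    return (re * re + im * im) / len(signal) ** 2

p_fund = bin_power(out, f0, fs)        # (nearly) zero after rectification
p_second = bin_power(out, 2 * f0, fs)  # strong second harmonic
```

The output spectrum contains energy only at even multiples of f0, which shows how a memoryless non-linearity creates frequency content the band-limited input did not have.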
31
LEERSTOEL MEDISCHE ELEKTROTECHNIEK
32
LEERSTOEL ELEKTROMECHANICA & VERMOGENSELEKTRONICA
33
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
P. Vreugdewater Rapportnr.: EMV 99-14 31 augustus 1999 Analysis and control of the undesired force of attraction in a specific linear motor Ir. P.A.F.M. Goemans, Ir. J.L. v.d. Veen Prof.dr.ir. J.C. Compter
Summary:
Normal forces in a specific linear permanent-magnet motor have been investigated. This linear motor is a construction of two parts: the first part is the moving part, called the translator, consisting of a base frame with a plate on which permanent magnets are alternately positioned. The second part is the stator, consisting of a SiFe yoke with a specific number of teeth; coils are wound around the inner teeth of the stator. In order to generate the desired motor force, currents flow through the coils in the appropriate directions, with appropriate amplitudes. Besides the horizontal motor force, there is also a vertical force: the normal force to be investigated. Within this study: 1. calculations have been made using an analytical model, based on a simplified equivalent circuit; 2. calculations have been made using the finite-element method; 3. measurements have been performed on a half-sized motor. According to the calculations as well as the measurements, variations occur in the attracting force between the two parts of the motor. A dependency on the relative position and on the current value is found. This changing attraction force causes vertical vibrations in the air bearings, which have a negative influence on their characteristics. To avoid this, we succeeded in finding a way to manipulate the variations in this attracting normal force. Still, further investigation of this phenomenon is highly recommended before implementation in an actual control system can be realized.
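The attracting normal force between a magnet plate and an iron yoke scales with the square of the air-gap flux density. A minimal sketch using the Maxwell stress expression, with flux density and area values chosen purely for illustration (they are not figures from the thesis):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [H/m]

def normal_force(b_gap, area):
    """Maxwell-stress estimate F = B^2 * A / (2*mu0) of the attraction force [N],
    assuming a uniform flux density b_gap [T] over air-gap area `area` [m^2]."""
    return b_gap ** 2 * area / (2 * MU0)

# Illustrative values: 0.8 T over 0.01 m^2 gives roughly 2.5 kN of attraction,
# showing why this parasitic force can matter so much for the air bearings.
f = normal_force(0.8, 0.01)
```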
34
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
D.H.J. van Casteren Rapportnr.: EMV 99-13 31-08-1999 Dimming of metal halide lamps Dr. J.L. Duarte, Ir. W.D. Couwenberg (Philips Lighting) Prof.dr.ir. A.J.A. Vandenput / Ir. M.A.M. Hendrix
Summary: There is growing interest in dimming high-pressure gas-discharge lamps, with the resulting energy savings as the goal. When an HID lamp with an electronic ballast is operated in a power range between 50% and 100%, electrical instabilities are observed. Sometimes these instabilities cause the lamp to extinguish. The graduation project comprises an investigation of these instabilities. The HID lamp used is a compact high-pressure gas-discharge lamp with metal-halide components in the discharge tube; its nominal electrical power is 73 W. The metal-halide components determine the colour properties and improve the luminous efficacy compared with a classic high-pressure mercury lamp. The starting phase can be divided into several discharge states. After a short period (a few minutes) a stable state is reached, with an associated temperature distribution. To study the electrical behaviour of the lamp in the stable state, the physical processes in the discharge tube are divided into plasma effects and electrode effects. A high-pressure gas-discharge lamp can be operated in several ways, including DC, LF, HF and pulsed drive. A stable method is low-frequency square-wave drive. In the dimmed state, however, the lamp-ballast combination can become unstable, causing the lamp to extinguish. Investigation of the instabilities in the dimmed state reveals two main problems: plasma instabilities and electrode problems. The plasma instabilities are caused by the interaction between the lamp and the circuit capacitance; this can be modelled as a second-order system. Further investigation shows that after a step-wise power reduction the stability margin of an HID lamp decreases. This results in larger power oscillations, which can cause the lamp to extinguish.
The electrode problems are related to the electrode temperature, because this determines the mode of operation. At nominal power it may be assumed that the electrodes have a temperature suitable for thermionic emission. When the lamp power is reduced, the limit of correctly functioning electrodes can be reached. Once this limit is reached, the situation becomes unstable and the lamp extinguishes. For further investigation, the electrical behaviour of the lamp was identified in order to build a computer simulation model. A grey-box model was chosen, so that the identification procedure could be split into static, fast-dynamic and slow-dynamic parts. The advantages of this method are a strong reduction of the measurement data required for the identification and a higher accuracy over a wider lamp-power range compared with black-box modelling. The most important components of the electronic ballast circuit are also included in the simulation model, so that the lamp-ballast interaction can be studied. A new control loop was designed and its performance was compared with that of the original control loop. To study the lamp behaviour in the dimmed state, an extra feedback signal is necessary. Four signals are proposed, each to study a particular part of the lamp behaviour. The purpose of this feedback is to determine the momentary minimum dimming level. In principle this depends on the lamp type used and its age.
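The plasma instabilities are modelled above as a second-order lamp-circuit interaction. As a rough, hypothetical illustration (the model structure and all component values are assumptions, not taken from the thesis), the stability margin of a second-order system can be summarised by its damping ratio, which shrinks as the effective damping decreases:

```python
import math

# Damping ratio of a series-RLC-like second-order system: zeta < 1 means an
# oscillatory response; the smaller zeta, the smaller the stability margin.
def damping_ratio(r_damp, l_eq, c_eq):
    return (r_damp / 2.0) * math.sqrt(c_eq / l_eq)

# Illustrative only: if dimming reduces the effective damping resistance,
# zeta drops and power oscillations grow, as observed for the dimmed lamp.
z_nominal = damping_ratio(100.0, 1e-3, 1e-6)
z_dimmed = damping_ratio(40.0, 1e-3, 1e-6)
```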
35
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
P.E.G. Smeets 15 april 1999 Structured Methodology for Optimizing a Half Bridge Resonant LLC Converter using MATLAB and Spice. Dr. J. Duarte, Ir. H.P.M. Derckx (Philips Lighting B.V.) Prof.dr.ir. A.J.A. Vandenput / Ir. M.A.M. Hendrix
Summary: Manufacturers of consumer electronics products are nowadays put under extreme price-performance pressure. Products have to be optimised to the highest level in terms of cost, size, environmental requirements and other issues. One of the circuits in those products relevant to this optimisation is the switched-mode power supply (SMPS). Improvements in the conventional power supplies currently used in the consumer electronics market (mainly flyback converters) are not to be expected anymore. In recent years the focus has shifted towards other topologies. One of the most promising topologies is the half-bridge resonant LLC converter; a new development curve should be initiated with this topology. In spite of its apparent simplicity, the optimisation of an LLC converter is rather complex. It is very difficult to translate qualitative optimisation criteria into concrete design parameters. Besides this difficult criteria translation, modelling techniques as discussed in the literature do not satisfy a designer's need for a fast but accurate method for designing an optimised LLC converter. One of those techniques is the first harmonic approximation (FHA). The first harmonic approximation is a simple technique that contributes enormously to the understanding of the basics of an LLC converter. However, during optimisation, when details become important, the first harmonic approximation is not accurate enough. This report presents a structured methodology for designing an optimised LLC converter starting from the converter specification and customer optimisation criteria. Part of this methodology is a newly developed design tool called DiMaS. This powerful tool combines the mathematical strength of MATLAB with the complexity of the circuit simulator SPICE. Based on initial component values calculated from the FHA with MATLAB, the design is optimised by using SPICE.
Not only are the FHA inaccuracies eliminated using this tool, but parasitics that can seriously disrupt the LLC converter behaviour can also be taken into account. After discussing the problem area, the first harmonic approximation is introduced. Next, a methodology for extracting concrete design parameters from qualitative optimisation criteria is presented. Based on these parameters, DiMaS starts the optimisation. The algorithms of DiMaS are treated in detail. Finally, the obtained results are verified in several verification designs.
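A minimal sketch of the first harmonic approximation mentioned above (component values are placeholders, not the thesis's designs): the square-wave bridge output is replaced by its fundamental, the rectifier and load by an equivalent AC resistance Rac = 8/pi^2 * n^2 * Rload, and the gain follows from the resonant divider Lr-Cr-(Lm || Rac).

```python
import math

def fha_gain(f_sw, Lr, Cr, Lm, n, Rload):
    """Voltage gain |Vout_ac / Vin_ac| of the LLC tank under the FHA."""
    w = 2 * math.pi * f_sw
    rac = (8 / math.pi ** 2) * n ** 2 * Rload        # reflected AC load resistance
    zs = 1j * w * Lr + 1 / (1j * w * Cr)             # series resonant branch
    zp = (1j * w * Lm * rac) / (1j * w * Lm + rac)   # Lm in parallel with Rac
    return abs(zp / (zp + zs))

# At the series resonance f0 = 1/(2*pi*sqrt(Lr*Cr)) the series branch vanishes,
# so the FHA gain is 1 regardless of the load -- a well-known LLC property.
f0 = 1 / (2 * math.pi * math.sqrt(100e-6 * 10e-9))
g = fha_gain(f0, 100e-6, 10e-9, 500e-6, 1.0, 50.0)
```

Evaluating this gain over a frequency sweep is exactly the kind of fast-but-approximate calculation the FHA provides before SPICE refines the details.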
36
CAPACITEITSGROEP INFORMATIE & COMMUNICATIESYSTEMEN
37
LEERSTOEL DIGITALE INFORMATIESYSTEMEN
38
Naam kandidaat: Afstudeerdatum: Afstudeerproject:
Begeleiding:
Afstudeerhoogleraar:
ing. H.A. Aalderink Rapport nr: ICS-EB 724 31 augustus 1999 WEB controllable devices, concept and design. Prof.ir. M.P.J. Stevens Dr.ir. A.C. Verschueren Ing. A.H.J.G. Lommen (TNO Industrie) Ir. P. Koomen (TNO Industrie) Prof.ir. M.P.J. Stevens
Summary: This Master's thesis was written as part of the MSc course in information technology at the department of electrical engineering of the Eindhoven University of Technology. All work in the thesis is the result of a joint project with TNO (Netherlands Organisation for Applied Scientific Research) Institute of Industrial Technology. The thesis explores the concept of web-controllable devices with respect to: the choice of protocol stack and the location of functionality (Telnet, TCP and IPv6 on the device, and HTTP on the server, which also contains the web pages and CGI scripts); the choice of a pipelined data-processor implementation of the protocol stack on the device; a discussion of Internet delay and the proposal of safety envelopes; and a discussion of modelling the Internet delay in controlled systems. The design of the chosen pipelined data processor covers: testability of the designed protocol processor; the processing architecture; a general interface to data-communication circuits; a bus structure to process the extension headers in the order in which they appear in the IPv6 payload packet; and a unit for insertion of the payload length, to solve the causality problem in pipelined encoders for variable-length messages. The designed units are modelled in synthesisable VHDL to generate design feedback and functional prototypes, using a rapid prototyping board populated with FPGAs developed within TNO.
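IPv6 extension headers are chained: each header names the type of the one that follows via a "next header" field, which is why the thesis needs a bus structure that processes them in order. A much-simplified software sketch of walking that chain (the thesis implements this in hardware; the payload bytes here are empty placeholders):

```python
# IANA protocol numbers: hop-by-hop options = 0, routing header = 43, TCP = 6.
HOP_BY_HOP, ROUTING, TCP = 0, 43, 6

def header_chain(first_next_header, headers):
    """Return the list of header type codes in processing order.

    `headers` maps a header type to (next_header, payload_bytes)."""
    order, nh = [], first_next_header
    while nh in headers:        # follow the next-header chain
        order.append(nh)
        nh, _ = headers[nh]
    order.append(nh)            # the upper-layer protocol terminates the chain
    return order

chain = header_chain(HOP_BY_HOP, {HOP_BY_HOP: (ROUTING, b""), ROUTING: (TCP, b"")})
# chain == [0, 43, 6]: hop-by-hop, then routing, then the TCP payload
```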
39
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Begeleiding: Afstudeerhoogleraar:
M.C.H. Baijens Rapportnr: ICS/EB 728 12 oktober 1999 Design of an MPEG audio layer 2 CODEC on a DSP Prof.ir. M.P.J. Stevens Ir. H. Kester, Ir. J.P.C.F.H. Smeets (Ellips B.V., Eindhoven) Prof.ir. M.P.J. Stevens
Summary: To reduce the amount of personnel needed for the surveillance of a building with many different rooms, Ellips B.V. is developing a Multimedia Surveillance System. The system sends audio and video information to a central control room. The audio from the different rooms has to be transmitted over a network with limited bandwidth; therefore audio compression is needed. For the audio compression algorithm, MPEG audio layer 2 compression has been chosen. The audio compression and decompression will be done by a DSP56002 processor. The report deals with the design of a real-time MPEG audio layer 2 CODEC for the DSP56002. This involves the software design of the CODEC as well as the determination of what hardware is needed. The starting points are the ISO/IEC example source that is distributed over the internet, called "dist10", and the Motorola DSP56002 evaluation module, called "DSP56002EVM". From there the design involves the following steps: get the source ready and functional in a PC environment, and remove the unnecessary code (other layers, the other psycho-acoustical model, etc.); convert this source to a fixed-point implementation on the PC; port the fixed-point PC implementation to the DSP56002; make the DSP source real-time by implementing time-critical parts in assembly; and adjust the hardware where necessary during these steps. The MPEG audio layer 2 CODEC has been designed for the DSP56002. The MPEG streams generated by the encoder have been verified and are valid MPEG audio layer 2 streams. The decoder has been tested with reference streams and decoded those streams correctly. Listening to the CODEC verified the audio quality to be quite good. To ensure high-quality audio a bitrate of 128 kbit/s per channel is preferred, but operation at a lower bitrate of 64 kbit/s, and thus a higher compression ratio, still produces acceptable audio quality.
For single-channel operation with sampling frequencies of 22 kHz and below, the CODEC achieves real-time operation. For higher sampling frequencies and stereo operation, only the decoder performs in real time. Because one-channel mode and a sampling frequency of 22 kHz are sufficient for the speech that will be transferred over the surveillance system, the goal of the project has been fulfilled. The required hardware for the CODEC is a DSP56002 running at at least 66 MHz and 128 k-words (24-bit) of 15 ns or faster SRAM.
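For context, the bitrates above map directly onto layer 2 frame sizes. A back-of-the-envelope sketch, assuming the standard layout of 1152 PCM samples per layer 2 frame and ignoring the optional padding byte:

```python
# A layer 2 frame carries 1152 samples, so its length in bytes is
# 1152/8 * bitrate / sample_rate = 144 * bitrate / sample_rate.
def frame_bytes(bitrate, sample_rate):
    """Approximate MPEG audio layer 2 frame length in bytes (padding ignored)."""
    return (144 * bitrate) // sample_rate

n = frame_bytes(128_000, 44_100)   # 128 kbit/s at 44.1 kHz -> 417 bytes per frame
```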
40
Naam kandidaat: Afstudeerdatum: Afstudeerproject:
Begeleiding:
Afstudeerhoogleraar:
D. Bohmermann Rapport nr: ICS-EB 723 31 augustus 1999 A functionally compatible Intel 8051 microcontroller soft-core in VHDL. Prof.ir. M.P.J. Stevens Dr.ir. A.C. Verschueren Ing. R.L.V. Niesten (TNO Industrie) Ir. P. Koomen (TNO Industrie) Prof.ir. M.P.J. Stevens
Summary: More and more functionality can be integrated on a single chip, resulting in a fast-growing IP (Intellectual Property) market. This growing market is possible because the number of gates per square millimetre is growing fast. Designers, however, cannot keep up with the growth in complexity. The use of cores and "design and reuse" are solutions for the designer to keep up with this growth. Cores, such as microcontrollers and DSPs, are tested modules that enable "design and reuse", which is used by System-on-Chip (SoC) technologies. SoC technologies give designers the opportunity to integrate their board designs into a single Application Specific Integrated Circuit (ASIC). Two types of cores are known, namely hard cores, which are technology dependent, and soft cores, which are technology independent. The graduation project consists of the development and implementation of a soft core and was carried out at TNO Industry. The developed and implemented soft core, which is the subject of this master's thesis, is a functionally compatible Intel 8051 microcontroller. The datapath of the microcontroller is implemented with a three-bus architecture, which is an easy and straightforward architecture. The datapath is also split into an 8-bit and a 16-bit part, to enable simultaneous fetching and execution in the future. The datapath is controlled by the Timing and Control Unit. Furthermore, parallel I/O ports are implemented for communication with the outside world. Due to time constraints the Timer/Counters, the interrupt controller and the serial port are not implemented. The microcontroller is modelled in VHDL with Summit VHDL. The result is synthesised with Leonardo Spectrum. Placement and routing is done with Altera's MaxPlus2. The microcontroller is programmed into a FLEX10K50 FPGA, which is present on a rapid prototyping platform developed by TNO Industry. As expected, the datapath contains the longest delay path.
The 8-bit Arithmetic Logic Unit, which is implemented in a straightforward way, consumes a large number of logic cells and is part of the longest delay path. The Control and Timing Unit is implemented with large look-up tables containing a sort of microcode, which also consumes a large number of logic cells, but makes it fast enough to control the datapath without problems. The I/O ports, finally, are not compatible with the standard I/O ports. The standard ports contain internal pull-ups, which are impossible to implement in the FPGA; extra Special Function Registers are added to indicate the direction of the port. The complete core consumes 1730 logic cells, which is 60% of the chip area, and runs at 8 MHz. The core is approximately 1.5 to 2 times faster than the standard microcontroller, depending on the program. The internal RAM, which is not counted in the number of logic cells, is implemented in Embedded Array Blocks, which are special blocks for memory modules. The same applies to the program memory. Future work should focus on optimising the design of the 8-bit ALU in the datapath by increasing its speed and decreasing its number of logic cells. The same applies to the Timing and Control Unit as far as the logic cells are concerned. Furthermore, the unimplemented modules mentioned above have to be modelled and implemented.
41
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt:
Begeleiding: Afstudeerhoogleraar:
M.W.C.M. Coenen 20 april 1999 Modelling and implementing a CAN soft-core Dr.ir. A.C. Verschueren Ing. R. Niesten (TNO Industrie, Eindhoven) Prof.ir. M.P.J. Stevens
Rapport nr: ICS-EB 712
Summary: The Controller Area Network (CAN) fieldbus concept was developed by Bosch for use in the automotive industry. Nowadays, CAN buses are also used in medical devices and in several other fields where reliability of the network is of major importance. The CAN protocol is a protocol that is said to be "crash-free". During my graduation period this CAN protocol was modelled and implemented. The work is based on two reports of the Eindhoven University of Technology. First, the CAN protocol was studied; then two state machines were developed to implement the main functionality: the bit-timing part of the physical layer and the control part of the data-link layer. Around these, several other parts operate to implement CAN. The most important features of CAN are the bit timing, the arbitration and the fault control. Furthermore, CAN uses bit stuffing, CRC, acknowledgements and data fields of different lengths. It all has to operate at bit rates between 20 kbps and 1 Mbps. Another constraint is that TNO uses an FPGA into which the design has to be programmed. An FPGA has limited space, and the CAN design, together with a microprocessor and an interface unit, has to fit in the FPGA. The final result is that the CAN model consists of two main state machines, one for the physical and one for the data-link layer. Both are fully implemented in the tool Visual HDL. Simulations show that the design is working correctly. During this testing, several test frames were used to test the correct handling of a frame and exceptional cases like arbitration loss, bit errors, frame errors, CRC and bit stuffing. It still has to be programmed into the FPGA itself. However, all the synthesis steps in the tools Leonardo Spectrum and Maxplus2 needed to do that have been completed successfully, and the speed constraints are satisfied.
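The bit-stuffing rule mentioned above can be sketched in a few lines. This is a software illustration of the rule itself, not the thesis's VHDL state machine: after five consecutive identical bits, the transmitter inserts one complementary stuff bit, so receivers keep seeing signal edges for resynchronisation.

```python
def stuff_bits(bits):
    """Insert a complementary stuff bit after five equal consecutive bits (CAN rule)."""
    out, prev, run = [], None, 0
    for b in bits:
        if b == prev:
            run += 1
        else:
            prev, run = b, 1
        out.append(b)
        if run == 5:
            out.append(1 - b)        # the stuff bit breaks the run
            prev, run = 1 - b, 1
    return out

stuffed = stuff_bits([0, 0, 0, 0, 0, 0])
# six zeros -> a stuff bit after the fifth zero: [0, 0, 0, 0, 0, 1, 0]
```

The receiver applies the inverse rule, discarding the bit that follows any run of five identical bits.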
42
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt:
Begeleiding: Afstudeerhoogleraar:
D. Cupic Rapport nr: ICS-EB 714 15 juni 1999 The IP Network Impairment Emulator Prof.ir. M.P.J. Stevens Z. Haraszti (SwitchLab, Ericsson Radio Systems AB) Prof.ir. M.P.J. Stevens
Summary:
In the master's thesis report a tool was designed that can introduce arbitrary delay, loss and bit-error patterns in an IP "link". The tool was descriptively named "Network Impairment Emulator" (NIE). Such a device can be used for emulating network impairments over an IP end-to-end connection and can mimic arbitrarily good or bad IP connections to the end protocols/applications. This provides a convenient and controlled way of validating protocol/application performance under different network conditions. NIE is implemented as a loadable kernel module in the FreeBSD UNIX operating system. The host, which is equipped with two (or more) LAN interfaces, functions as a router, and forwarded packets are impaired according to the configured loss, delay and bit-error models. For each of the three impairment types a number of basic stochastic (random) models are built into the system, and the user can select and parameterise the models independently. The model set can be easily extended. In addition to the stochastic models, packet loss and bit-error emulation can be done using trace files. NIE is a multichannel emulator, i.e., it can handle a number of "flows" with different impairment settings simultaneously.
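A typical example of such a basic stochastic loss model is a two-state (Gilbert-Elliott style) Markov chain that produces bursty rather than independent losses. The sketch below is illustrative of the class of model, not NIE's actual implementation, and all parameter values are assumptions:

```python
import random

class TwoStateLoss:
    """Bursty packet-loss model: a 'good' and a 'bad' channel state."""

    def __init__(self, p_good_to_bad, p_bad_to_good, loss_in_bad, seed=0):
        self.p_gb, self.p_bg, self.loss_bad = p_good_to_bad, p_bad_to_good, loss_in_bad
        self.bad = False
        self.rng = random.Random(seed)

    def drop(self):
        """Advance one packet; return True if it should be dropped."""
        if self.bad:
            if self.rng.random() < self.p_bg:
                self.bad = False
        elif self.rng.random() < self.p_gb:
            self.bad = True
        return self.bad and self.rng.random() < self.loss_bad

model = TwoStateLoss(0.05, 0.5, 0.8)
losses = sum(model.drop() for _ in range(10_000))
# stationary bad-state probability 0.05/0.55 ~ 9%, times 0.8 -> ~7% loss rate
```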
43
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Begeleiding: Afstudeerhoogleraar:
E.V. Molenaar
Rapport nr: ICS-EB 708
9 februari 1999 Modelling of a functional compatible and testable 8051 soft-core Prof.ir. M.P.J. Stevens, Dr.ir. A.C. Verschueren Ing. R. Niesten (TNO Industrie, Eindhoven) Prof.ir. M.P.J. Stevens
Summary:
A compatible Intel 8051 microcontroller soft-core had to be developed. Specific research on this topic has taken place at the Eindhoven University of Technology. The result of this research was the development of four different versions of the 8051. The first design was not synthesizable. The second 8051 was synthesizable but too slow. The third design was an enhanced version, which made it incompatible with the standard 8051. Recently, a very fast pipelined version has been developed, but the pipelined core was found too complex. This master's thesis describes the development of a three-bus Central Processing Unit capable of executing the instructions of the 8051. A system architecture was defined, consisting of three buses. It was designed to execute all 111 instructions of the standard 8051. The design was modelled and simulated with the Visual HDL tool. The design had to be testable and synthesizable into a Field Programmable Gate Array, the ALTERA FLEX 10K50. In this project, 21 of the 111 instructions were implemented, and simulation proved that the CPU executed them correctly. The system was split into a datapath part and a controller part. The datapath consists of several components that were built from eight-bit registers, eight-bit tristate drivers, eight-bit multiplexers and logic. Some additional components were added to complete the functionality of the data processing part. The datapath components are controlled by a Mealy-type state machine. The controller was modelled in VHDL using two separate processes in the architecture body: one process describes the sequential part of the circuit and the other the combinational part. The design is synthesized with the GALILEO tool, targeted to the technology mentioned above. GALILEO reports a maximum achievable clock frequency of 13.7 MHz. The clock speed of the present design can be further increased by optimizing the ADDRESSING UNIT.
When extending the ALU with more functions, its delay may become dominant in the system's longest combinational path, forming a bottleneck for the clock speed. Thus one must take care where to extend the ALU. The design consumed 49% of the chip area; the CPU without its internal RAM consumed 16% of the chip area. The average execution time of the implemented instructions is 11 clock cycles. It cannot be reduced because it depends on the speed of the external program memory used: the fetch time is the bottleneck of the execution time. This problem can be solved by introducing more parallelism into the system, making the instruction fetching independent of the instruction execution. The system performance lies between 0.76 and 2.74 MIPS. In the future, the untested instructions must be verified for correct functionality. The ALU and the PROGRAM COUNTER MODIFIER components may be integrated into one component. The complete system must be further checked for correct functionality to increase the overall reliability of the design.
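The performance figures above follow directly from clock frequency and cycles per instruction. A quick sanity-check sketch: the 13.7 MHz and 11-cycle numbers come from the summary; the 5- and 18-cycle instruction counts below are hypothetical values chosen only to show how the 0.76-2.74 MIPS range could arise.

```python
def mips(clock_hz, cycles_per_instruction):
    """Millions of instructions per second for a given clock and cycle count."""
    return clock_hz / cycles_per_instruction / 1e6

avg = mips(13.7e6, 11)    # ~1.25 MIPS at the reported 11-cycle average
fast = mips(13.7e6, 5)    # a hypothetical 5-cycle instruction -> 2.74 MIPS
slow = mips(13.7e6, 18)   # a hypothetical 18-cycle instruction -> ~0.76 MIPS
```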
44
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt: Begeleiding: Afstudeerhoogleraar:
M.J.J. Reumers Rapportnr: ICS-EB 710 20 april 1999 Developing and implementing a Peripheral Interface Controller soft-core Dr.ir. A.C. Verschueren Ing. T. Lommen (TNO Industrie, Eindhoven) Prof.ir. M.P.J. Stevens
Summary: More and more functionality can be integrated on one chip because of technological development. Currently, the number of gates per square millimetre is growing faster than the number of gates a designer can implement. A solution to this is the use of cores. This Master's thesis describes the development of a Peripheral Interface Controller (PIC) soft-core, a simple 8-bit microcontroller. The PIC16C5x soft-core is developed and simulated in Summit Visual VHDL. The core is implemented on an Altera FLEX 10K FPGA. The program memory is implemented in Embedded Array Blocks (EABs); an EAB is a specially designed block to implement memory in. Programs for the core can be written with the standard PIC tools. These tools generate a HEX file. A program was written that converts this HEX file into a HEX file that can be loaded into the EABs. Because every instruction, except program branches, executes in one clock cycle, it is not possible to implement the data memory in an EAB. For now, the data registers are implemented in logic cells. In terms of hardware this is expensive, because each register requires at least 8 logic cells. To implement the data memory in an EAB, the instruction cycle has to be divided into more than one clock cycle, or a FLEX10KE FPGA has to be used. The implemented PIC core runs at a maximum clock frequency of 13.5 MHz and occupies 902 logic cells.
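The HEX files that standard PIC tools emit are typically in Intel HEX format (an assumption about the exact format; the summary only says "HEX-file"). A simplified sketch of decoding one data record, with checksum verification and non-data record types omitted:

```python
# An Intel HEX record looks like ":LLAAAATT<data>CC": LL = byte count,
# AAAA = load address, TT = record type (00 = data), CC = checksum.
def parse_hex_record(line):
    """Decode one data record into (address, [data bytes])."""
    assert line.startswith(":"), "records start with a colon"
    raw = bytes.fromhex(line[1:])
    count, addr, rtype = raw[0], (raw[1] << 8) | raw[2], raw[3]
    assert rtype == 0, "only data records are handled in this sketch"
    return addr, list(raw[4:4 + count])

addr, data = parse_hex_record(":0300300002337A1E")
# 3 bytes destined for address 0x0030: [0x02, 0x33, 0x7A]
```

A converter like the one described in the summary would walk all records, regroup the bytes into the EAB word width, and emit them in the loader's expected format.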
45
Naam kandidaat: Afstudeerdatum: Afstudeerprojekt:
Begeleiding: Afstudeerhoogleraar:
B.D. Theelen Rapport nr: ICS-EB 713 20 april 1999 Towards Modelling Optical WDM Transport Networks Dr.ing. P.H.A. van der Putten Dr.ir. J.P.M. Voeten Dr. H.J.S. Dorren Prof.ir. M.P.J. Stevens
Summary: Compared to existing technologies, Wavelength Division Multiplexing (WDM) enlarges the amount of information transmitted via an optical fiber. WDM therefore provides a potential solution to the severe requirements involved in future communication networks concerning higher bit rates, longer distances with less delay, higher reliability and better cost efficiency. Because WDM merely incorporates functionality for transmitting bits, without concern for their meaning or intrinsic structure, full advantage of offered services can only be taken when WDM is efficiently supported by electrical techniques. Enabling combinations of WDM with electrical techniques originating from the Synchronous Digital Hierarchy (SDH), the Asynchronous Transfer Mode (ATM) and the Internet Protocol (IP) requires a structured investigation of the available functionalities. Functionality incorporated in communication networks is commonly specified using layer models, which abstract from final implementation details. A general investigation of layer models regarding the concepts of functional architectures and network architectures shows a valuable correlation. Application of such concepts reveals an overlap between functionalities specified for SDH or ATM and functionality incorporated in WDM. By contrast, specification of additional functionality is necessary to enable the combination of IP with WDM. As a result, electrical techniques originating from SDH or ATM may directly support WDM, while electrical techniques originating from the Point-to-Point Protocol (PPP) technology form a promising candidate for constructing the combination of IP with WDM. A (dynamic) reservation of the channels offered by WDM enables concurrent support of SDH, ATM and PPP over WDM. Next to functionality concerning services offered to users, a communication network includes indispensable functionalities for performing network management. Various aspects make network management special.
In addition to the concepts of network architectures and functional architectures, the Telecommunications Management Network (TMN) concept empowers the specification of functionalities regarding network management. Considering (concurrent) support of electrical techniques over WDM, utilisation of a concept like the TMN concept is a prerequisite for ensuring an unambiguous realisation of network management. In advance of implementing a communication network, practical simulations entail modelling its modules. A communication network whose transport network is based on WDM therefore involves modelling WDM systems according to a practical view. Modules that may contribute to the composition of a WDM system are classified into amplifier modules, add/drop modules, cross-connect modules and channel-end modules. Although general aspects of modelling modules of communication networks are investigated, the functionality implemented in amplifier modules is discussed in more detail as an example.
An additional objective of the graduation report is to associate terminology of various concepts available for communication networks.
46
Naam kandidaat: Afstudeerdatum: Afstudeerproject: Begeleiding: Afstudeerhoogleraar:
T.J. Thijssen 15 juni 1999 Study of Turbo Codes for xDSL applications Dr.ir. F.M.J. Willems (TUE), T. Pollet (Alcatel), P. Atoine (Alcatel) Prof.ir. M.P.J. Stevens
Summary: In the past years, the capacity of the backbone of the telecommunication infrastructure has been significantly increased by upgrading from a copper network to an optical network. An increase in capacity in the access network could also be achieved by replacing the existing twisted copper pairs by optical fibres. However, over 700 million phone lines are installed all over the world, and it would require huge investments to replace all these lines by optical fibre. It would therefore be interesting to increase the capacity while still using the existing twisted pair. For this purpose, xDSL systems are developed. With an xDSL system, it is possible to achieve high-bitrate communication over the existing twisted pair. Current applications of xDSL systems are high-speed Internet access and video-on-demand services. The achievable bit rates for the best-known member of the xDSL family, Asymmetric Digital Subscriber Line (ADSL), are up to 8 Mbit/s downstream and 1 Mbit/s upstream. In order to support other applications such as transmission of TV signals and inter-LAN connections, effort is being taken to further increase the bit rate of xDSL systems. The latest technology in xDSL is VDSL, which stands for Very high speed Digital Subscriber Line. The key factor in the performance increase of xDSL systems compared to traditional voice-band modems is the fact that, besides the frequencies in the voice band, higher frequencies are also used. The system uses Discrete MultiTone modulation, where the frequency spectrum is divided into a number of small bands. In each of these bands, a QAM signal is transmitted. The size of the QAM constellation is optimised based on the signal-to-noise ratio in that particular band. The current channel coding scheme in ADSL is divided into two parts. A fast path is available for time-sensitive data; in this path, a Reed-Solomon code is applied. For less time-sensitive data, an interleaved path is present.
In this path, the data is encoded by a Reed-Solomon code, interleaved and encoded a second time by means of Trellis Coded Modulation. In 1993, Berrou et al. introduced a new class of codes called turbo codes. In a turbo code, a parallel concatenation of two recursive convolutional codes is used. The input stream is encoded twice, once in the original sequence and once in a randomised sequence. This randomised sequence is obtained by the use of an interleaver. At the decoder side, these codes are decoded by means of a Soft-Input Soft-Output algorithm. Reliability information is used at the input and provided at the output of the decoding algorithm; this reliability information is passed on to the next decoding step. Such an iterative decoding scheme yields a performance close to the Shannon capacity limit at an acceptable complexity. Due to the superior performance of these codes, much research has focused on this new class of codes in the last few years. In this project, the possibilities of applying the concept of turbo codes in an xDSL system are considered. The major design criteria are the delay and complexity of the system versus the increase in performance. With a turbo code over BPSK modulation, a performance of about 1 dB from the Shannon limit is possible at an acceptable complexity. The goal is to find a coding scheme for xDSL that can achieve a similar performance. In order to do this study, a simulation environment has been developed. With this simulation environment, it is possible to simulate both the current coding scheme of ADSL and new proposals. New proposals have been evaluated and some will be investigated further. At the time of writing this report, studies are still going on.
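The parallel concatenation described above can be sketched as follows. This is a minimal illustration, assuming a rate-1/3 turbo encoder built from two identical recursive systematic convolutional (RSC) component codes with octal generators (7, 5) and a pseudo-random interleaver; these choices are illustrative only, not those of any ADSL/VDSL proposal studied in the thesis.

```python
import random

def rsc_parity(bits):
    """Parity stream of a rate-1/2 recursive systematic convolutional code
    with octal generators (7, 5): feedback 1 + D + D^2, feedforward 1 + D^2."""
    s1 = s2 = 0
    parity = []
    for d in bits:
        a = d ^ s1 ^ s2        # recursive feedback (1 + D + D^2)
        parity.append(a ^ s2)  # feedforward tap (1 + D^2)
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, interleaver):
    """Rate-1/3 turbo encoder: the systematic stream plus two parity
    streams, the second computed on an interleaved copy of the input."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in interleaver])
    return bits, p1, p2

random.seed(0)
msg = [random.randint(0, 1) for _ in range(16)]
pi = list(range(len(msg)))
random.shuffle(pi)                       # pseudo-random interleaver
systematic, par1, par2 = turbo_encode(msg, pi)
```

At the decoder, the two parity streams would feed two soft-input soft-output decoders exchanging extrinsic information across the same interleaver; that iterative part is beyond this sketch.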
47
Naam kandidaat: M. Verhappen
Rapport nr: ICS-EB 730
Afstudeerdatum: 7 december 1999
Afstudeerproject: System Level Performance Modeling of a Complex High-Speed Packet Switch - Modeling PRIZMA-T using POOSL
Begeleiding: Prof.ir. M.P.J. Stevens, Dr.ir. J.P.M. Voeten, ir. R.P. Luijten (IBM Zurich Research Laboratory)
Afstudeerhoogleraar: Prof.ir. M.P.J. Stevens
Summary:
Current telecommunication systems should be able to provide services to a wide range of traffic classes. These classes have different characteristics, but should be handled in a uniform fashion, for example by one type of switch. While system complexity increases with time, demands on the time-to-market become stricter. There is no time to investigate all solutions. Therefore, an abstract system model is to be built that enables evaluation of certain system properties at an early development stage. The first project objective is to obtain knowledge on how to build models of complex communication systems. The second objective is the specification of a model of the PRIZMA-T switch. PRIZMA-T is a lossless, self-routing, single-stage switch and is being developed at the IBM Zurich Research Laboratory. Some requirements for this system are lossless switching, a minimum availability of bandwidth and internal switch resources, and an appropriate best-effort discarding scheme, all in the context of multiple traffic classes. The model is specified in POOSL (Parallel Object-Oriented Specification Language), which is developed at the Eindhoven University of Technology. POOSL is a language with a fairly limited syntax but great expressive power. This leads to compact, discussible and efficient models. POOSL's mathematical semantics allow for formal qualitative and quantitative system verification. The modeler has to select a modeling approach from countless alternatives. To facilitate this choice, several general modeling issues should be related to the system under investigation. These issues are: a modeling view on real-time and functional behavior, communication and concurrency, decisions about the modeling of time and packet flows, parametrizability, and the collection and presentation of simulation results. It is not possible to use analytical techniques for performance analysis of complex systems such as PRIZMA-T. The reason for this is the explosion of the system's state space.
For empirical analysis, the Markov Chain Monte Carlo method is chosen and applied to the model. One simulation trace of the model provides a confidence interval for estimated metrics such as load, throughput and delay. Future research should focus on confidence intervals for jitter and memory occupancy values. After the general modeling issues are considered, valid abstractions of the architecture specifications are made. Abstractions can be divided into intuitive abstractions and abstractions from architecture structure, communication and concurrency. This process is greatly supported by the expressive power of POOSL and its underlying system-level design methodology. The abstract, adequate system model that results from the modeling phase described above makes it possible to evaluate system properties of PRIZMA-T and can be used to support design decisions for future PRIZMA generations.
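The confidence-interval estimation mentioned above can be sketched as follows. This is a hedged illustration, assuming the usual normal approximation for the sample mean; the synthetic exponential per-packet delays and the sample size are made-up inputs, not the estimators or data of the PRIZMA-T model.

```python
import math
import random

def confidence_interval(samples, z=1.96):
    """Normal-approximation 95% confidence interval for the mean of a
    sequence of simulated observations (e.g. per-packet delays)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                          # interval half-width
    return mean - half, mean + half

# One synthetic "simulation trace": exponential per-packet delays, mean 5.0.
random.seed(1)
delays = [random.expovariate(1 / 5.0) for _ in range(2000)]
lo, hi = confidence_interval(delays)
```

A longer trace narrows the interval as 1/sqrt(n), which is why one sufficiently long simulation run can already bound metrics such as delay.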
48
Naam kandidaat: A.D.M. van de Ven
Rapport nr: ICS-EB 703
Afstudeerdatum: 9 februari 1999
Afstudeerproject: Using data-compression techniques to improve the performance of image-processing on a PC
Begeleiding: Prof.ir. M.P.J. Stevens, Dr.ir. F.M.J. Willems, Ir. M. Krom (Océ Technologies B.V., Venlo)
Afstudeerhoogleraar: Prof.ir. M.P.J. Stevens
Summary:
This research investigates if and how data-compression techniques can be used to improve the feasibility of implementing the image-processing of a digital copier on a standard high-end PC. The image-processing performed in digital (colour) copiers is traditionally thought of as very complex and computationally intensive. However, scanned images have a lot of redundancy, such as white areas, where the image-processing effort can be reduced significantly. This is where data-compression and image-processing meet. Data-compression uses a set of techniques to reduce the number of bits required for the storage of certain information by exploiting the "redundancy" in the original data. These techniques are designed to find redundancies, and the idea is to use them to reduce the computational complexity of the image-processing. Traditional lossless compression algorithms such as run-length encoding, LZ77 and LZW perform, in general, poorly on scanned, noisy images and are of little use for improving image-processing speed. The more modern wavelet analysis and JPEG/DCT compression are better capable of dealing with these circumstances and are therefore investigated further. Wavelet compression uses the notion that information is present at multiple levels, or resolutions. The Master's thesis shows that this notion can be used to improve the speed of the more complex image-processing operations by more than a factor of two. JPEG/DCT compression uses a Discrete Cosine Transform and subsequent quantisation of selected frequency components to achieve compression. While the steps of compressing to and decompressing from JPEG are rather expensive, it is investigated how the DCT data can be used for image-processing in a system that already uses JPEG for storage. The results show that only a (small) subset of the image-processing operations can be performed in the DCT domain.
In the light of digital copiers on PCs, the possible bottlenecks for image-processing on PC architectures are investigated, along with some future trends in this area. The result of this investigation is that the most inhibiting factor is the microprocessor core. Both the efficiency and the raw speed of the processor core are improved in microprocessors such as Intel's Merced.
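The link between redundancy and reduced processing effort discussed above can be illustrated with run-length encoding, the simplest of the algorithms considered; the scan line below is a made-up example, not data from the thesis.

```python
def run_length_encode(row):
    """Encode one scan line as a list of (pixel value, run length) pairs."""
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1
        else:
            runs.append([pixel, 1])
    return [tuple(r) for r in runs]

# A mostly-white scan line (255 = white). Long white runs compress well,
# and an image-processing pass can skip them wholesale instead of
# touching every pixel.
row = [255] * 120 + [30, 31, 29] + [255] * 77
runs = run_length_encode(row)
```

On noisy scanned data, runs are short and this representation gains little, which matches the thesis's observation that classic lossless coders help only where large uniform areas exist.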
49
Naam kandidaat: F. van Wijk
Rapportnr: ICS/EB 729
Afstudeerdatum: 12 oktober 1999
Afstudeerproject: A POOSL Model of the MASCARA Steady State Control
Begeleiding: Prof.ir. M.P.J. Stevens, Dr.ir. D.R. Dams
Afstudeerhoogleraar: Prof.ir. M.P.J. Stevens
Summary: The Parallel Object-Oriented Specification Language (POOSL) is meant for the specification of systems that are complex, reactive, concurrent, real-time and distributed. The SHESim tool can be used to construct POOSL models and run simulations on them. An abstract model of the Steady State Control part of the MASCARA protocol is developed in POOSL. The MASCARA protocol is a Medium Access Control protocol for wireless ATM. It is used within the European research project VIRES (Verifying Industrial REactive Systems). Since the MASCARA protocol as a whole is very large, a selection has been made. The selected parts of the MASCARA protocol are remodelled in POOSL on the basis of available informal specifications, source code written in SDL (Specification and Description Language, used within VIRES) and several Message Sequence Charts (MSCs). The original SDL source code is incomplete and contains modelling errors. Results of intermediate simulations of the POOSL model lead to suggestions for improvements. The development of the POOSL model also leads to extensive documentation on the modelled part of the MASCARA protocol. Furthermore, it serves as an example to examine the suitability of the POOSL language and the SHESim tool for the development of complex protocols. To some extent, the formal languages SDL and POOSL are compared and simulation as an approach is evaluated. Several abstractions are made to grasp the essential behaviour. The underlying communication structure (protocol layers) is omitted, so that peer processes communicate with each other directly, and the data part of the protocol is also left out completely. Decisions based on certain data values are replaced with probabilistic decision mechanisms, and frequency channel and MAC address assignments are simplified. Simulation results indicate that the model works correctly. However, error-free simulation runs do not guarantee the correctness of the model.
Rules of thumb are established regarding system parameters that determine timer values. In spite of different terminology, specifications in both SDL and POOSL consist of a number of concurrent processes that communicate by exchanging information over channels. The SDL syntax and layout structure can be converted into POOSL syntax and layout structure quite easily. There are important semantic differences, however. The underlying communication mechanisms are completely different: SDL uses asynchronous buffered communication, whereas POOSL uses synchronising rendezvous communication. Furthermore, the grain of parallelism in SDL is situated at the state-transition level, whereas in POOSL it is situated at the statement level. These semantic differences are the main causes of the deadlocks that occurred during the development of the POOSL model. They are eliminated using ad-hoc solutions. Working with the SHESim tool appears to be very intuitive. The rendezvous communication mechanism makes the behaviour of POOSL models easier to understand and predict than that of SDL models. However, exhaustive as well as non-exhaustive verification of correctness properties is not supported. Because of this, the correctness of the behaviour that corresponds to a specific execution trace can only be examined manually by the user. Extending the tool with the possibility of non-exhaustive verification would be an improvement. Furthermore, it is recommended to add the possibility of guided simulation to the tool. This would enable the user to reproduce a specific erroneous situation in a simulation run more quickly. Finally, addition of the possibility to monitor
statistical properties of messages over communication channels would automatically provide the user with interesting figures of the model being considered.
50
Naam kandidaat: J.H. Derikx
Afstudeerdatum: 31 augustus 1999
Afstudeerproject: "Roaming Java Services for Intelligent Networks"
Begeleiding: Ir. van der Meer
Afstudeerhoogleraar: Prof. de Stigter
Summary: When a subscriber moves abroad with his cellular phone, he is no longer able to access his Intelligent Network (IN) services. There are several solutions to this problem. The CAMEL (Customized Applications for Mobile network Enhanced Logic) standard presents one. This report describes another solution: "roaming IN services". The IN services of a subscriber are stored in his home Service Control Function (SCF). Once he travels beyond the service area of his home network, into the service area of a visited network, his IN services are no longer accessible. The CAMEL standard solves this by establishing a signaling connection from the visited Service Switching Function (SSF) to the home SCF, thus accessing the IN services. This report investigates the possibility of moving the IN service from the home SCF to the visited SCF and executing the service locally. The report roughly consists of two parts. The first part describes the research on the mobile code concept and the security involved with it. It presents an architecture for a "roaming IN services" system. The second part describes the development and implementation of a prototype system using the Java programming language. Three mobile code paradigms are discussed: the "remote evaluation", the "code on demand" and the "mobile agent" paradigm. Currently the mobile code concept is not implemented in the IN architecture. The prototype shows that it can be successfully integrated into the IN architecture. Moving code fragments to another computational environment entails a big security risk for both the host and the code. The host needs to be protected against the code and vice versa. Host security is thoroughly described in the literature. This report describes the potential risks and the presented solutions. Examples of the latter are digital signatures, security policies and an interpreter. Protecting code against a hostile host proves difficult.
It remains impossible to prevent tampering without the use of secure hardware. All protection mechanisms are based on detecting possible attacks. Legal measures may suffice to protect both host and code. Java has several mechanisms to protect the host against the code. It uses an interpreter in combination with protection domains and a security policy. Optionally, a cryptographic extension can be used; the latter is not used in the prototype. Java does not provide measures to protect the code. The "roaming IN services" architecture comprises four types of nodes: the SSF node, the SCF node, the Java Service Environment (JSE) node and the IN server node. The SSF node has the same functionality as in a normal telephone system. The SCF node is reduced to a relay node. The JSE node is the heart of the architecture and is responsible for downloading and executing the IN service. The IN server node is a centralized server that holds all available IN services, service data and subscriber data. The prototype implements the basic features of the architecture. It uses the Rapid Service Prototyping (RSP) software, programmed in Erlang, to simulate a telephone system. The JSE and the IN server node are programmed in Java. The Jive application provides the communication between Erlang and Java. Java has several mechanisms to transport code. They were presented with their advantages and disadvantages. Java Remote Method Invocation (RMI) was selected for the prototype and provides the communication between the JSE and the IN server node. The prototype is operational. When a subscriber registers in a network, detection points are transferred from the IN server to the JSE. These detection points are installed in the SSF when a call is initiated. When the subscriber triggers an IN service, the service is downloaded from the IN server by the JSE and executed locally.
The detection points and IN services are cached in the JSE while the subscriber remains registered in the network. Java does not provide explicit means to unload classes. The use of RMI makes this process even more difficult. It remains unclear whether Java is able to reliably unload classes. RMI itself also has several disadvantages that justify research on other communication protocols. It also remains unclear how Java will perform on a heavily loaded system. Because the prototype uses the local network, it is not possible to determine how much the transport delays would increase in a real network. These subjects are left for further study.
51
LEERSTOEL ONTWERPKUNDE VOOR ELEKTRONISCHE SYSTEMEN
52
Naam kandidaat: B.M.H. Arts
Rapport nr: ICS-ES 726
Afstudeerdatum: 31 augustus 1999
Afstudeerproject: Mistral2 to Facts
Begeleiding: Prof.dr.ir. J.L. v. Meerbergen, Dr.ir. C.A.J. van Eijck
Afstudeerhoogleraar: Prof.dr.ir. J.A.G. Jess
Summary: In cooperation with Philips, the company Frontier Design has developed a system that synthesizes a DSP architecture, starting from an algorithmic description of the behaviour of the processor. This synthesis system is called Mistral2. Code generation is the process of generating assembly code for processors; it includes resource assignment, register binding and scheduling. A software infrastructure called Facts has been set up at the Eindhoven University of Technology (TUE) for doing research in this area. Facts is a code generation tool in which constraint analysis techniques developed at the TUE and the Nat.Lab. play a central role. To make sure that the techniques applied in Facts are able to cope with relevant industrial applications, it is necessary to obtain industrially relevant code generation benchmarks for Facts. Therefore, it would be very useful if the Mistral2 frontend could be used to generate these benchmarks. We have designed a software interface from Mistral2 to Facts. This interface is capable of reading binary information created by Mistral2 v1.4 rev2 and transferring the necessary information to a textual input format for Facts. The interface has been tested using a broad range of Mistral2 examples. On the one hand, the functionality of the interface has been verified visually using small examples. The small examples can be divided into four categories: basic blocks, hierarchical examples, loops and conditional constructs. On the other hand, the robustness of the interface has been verified using larger examples. The interface is capable of dealing correctly with basic blocks, hierarchical examples, loops and larger examples. The generated output for examples that contain conditional constructs does not yet match the functionality present in the Mistral2 input description. Future work includes improving the way the interface deals with conditional constructs.
Furthermore, it is very interesting to translate the results of Facts back to Mistral2. In order to do this, the results of Facts must be translated to Mistral2 pragmas that can control the Mistral2 scheduler.
53
Naam kandidaat: R.F.M. van Berkel
Afstudeerdatum: 9 februari 1999
Afstudeerproject: Mapping of the Super Audio CD Stream Manager on the TriMedia TM1000 Architecture
Afstudeerhoogleraar: Prof.dr.ing. J.A.G. Jess
Summary:
Over the last years, the development of digital signal processors and general-purpose processors has seen great progress. The latest products are so powerful that they can possibly be used to replace dedicated hardware such as programmable logic devices. This is particularly useful in the development of consumer electronics, where the greater flexibility and easier reprogrammability of processors programmable in a high-level language can make a difference. A Stream Manager, used in a prototype of a disc player for a new audio CD format and traditionally implemented in programmable logic devices, is used as an example to experiment with mapping its functionality onto the VLIW multimedia processor TriMedia TM1000.
54
Naam kandidaat: F.G.P. Peek
Afstudeerdatum: 31 augustus 1999
Afstudeerproject: Adaptive web page filtering using agent technology
Begeleiding: Ing. F.C.R. van Doorn
Afstudeerhoogleraar: Prof.dr.ir. J.A.G. Jess
Summary: The World Wide Web is one of the services offered to publish and discover information on the Internet. The notion of being "lost in hyperspace" is familiar to most Web users. This paper describes this problem and proposes an Intelligent Agent as a web page filtering system which evaluates feedback from the users and uses this feedback to adapt to their interests. It discusses the characteristics and behaviour of such an agent, its goals, environment, and learning and decision-making methods, based mostly on some existing solutions. Problems with designing such an agent are described, and sources providing useful information for the system are explored. A simple prototype implementation is presented as well.
55
Naam kandidaat: G.T.G. Volleberg
Afstudeerdatum: 9 februari 1999
Afstudeerproject: Bugs, errors and mistakes in software: source code analysis with procedure summaries
Begeleiding: Dr.ir. L. Stok
Afstudeerhoogleraar: Prof.dr.ing. J.A.G. Jess
Summary: This thesis deals with source code analysis, i.e., finding symptoms of possible errors in the C++ programming language. Programs are becoming larger and quality (fewer errors) needs to be improved. Software developers are having problems achieving those goals in their programs. Static source code analysis tools can help in reducing the number of errors. BEAM (Bugs, Errors And Mistakes) is such a tool. BEAM tries to find symptoms of errors in source code by statically analyzing it and performing data-flow analysis. This is in contrast to dynamic analysis, which needs to execute the code. Each procedure that is analyzed may depend on other procedures. A Summary that describes the behavior of a procedure is helpful during analysis: selective expansion of procedures then becomes possible, based on the available Procedure Summary. With Procedure Summaries, BEAM is able to report more real errors and fewer bogus errors because of the detailed information available. Prior to the actual implementation of the Procedure Summaries, the kind of analysis that BEAM performs is explained in detail. Since other static source code analysis tools are available on the market, a comparison is made to describe the advantages of BEAM. Most available tools check only for stylistic errors, especially in the "++" part of C++, e.g., class structures. BEAM checks only very few stylistic errors, and focuses more on the problem of finding executable paths leading to a real error. Besides looking for general symptoms, BEAM is also used for finding application-dependent error symptoms. Furthermore, measurements on the symptoms found showed that the application-dependent part is far more powerful (an order of magnitude) than finding general symptoms. Before implementing the several Procedure Summary methods, a propagation algorithm was needed which decides on the (re)computation of the procedures.
This algorithm makes its decisions based on the information available in the CALL-graph. First, a post-order list is built to determine the order in which procedures need to be (re)computed. After a Procedure Summary is computed, the dependencies on other procedures are checked, and based on that information the algorithm determines and initiates the (re)computation of the procedures involved. A flexible framework for the Procedure Summaries was desirable. Only a few changes in BEAM were needed for BEAM to be able to use the Summaries. Different Procedure Summary algorithms have to work independently, and it should be easy to add new algorithms. Also, the propagation algorithm was not allowed to depend on the Procedure Summaries. A framework conforming to these restrictions has been implemented for flexible and extendable Procedure Summaries in BEAM. Several Procedure Summary algorithms have been implemented, starting with an algorithm providing merely statistics. Conclusions concerning demand-driven analysis are drawn based on the results of computations with this algorithm. The second implementation is pointer dereferencing. At the moment this is the only algorithm actually used by BEAM in its analysis, with interesting and useful results. More detailed analysis was needed for global variables. Control-flow analysis was used to determine the order of assignments and to find MAY/MUST information. The last implementation is a pointer aliasing Procedure Summary. This also involved control-flow analysis similar to the one for global variables. The data-flow analysis for figuring out pointers was more complicated. For now only the control-flow aliasing part is implemented; the data-flow part is described in this report. The core of the Procedure Summary environment for BEAM has been set up with the propagation algorithm and several Procedure Summary algorithms.
All implementations behave according to expectations; deviations from the desired behavior are discussed along with their possible solutions. The use of the pointer dereferencing Procedure Summary by BEAM showed impressive results by finding twelve new symptoms. Along with the flexible framework, the basis for the Procedure Summary environment has been realized.
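The post-order (re)computation order over the CALL-graph described above can be sketched as follows; the call graph and procedure names are hypothetical, and BEAM's actual data structures are not shown.

```python
def postorder(callgraph, root):
    """Post-order walk of a call graph: every procedure appears after the
    procedures it calls, so each Procedure Summary is computed only once
    its callees' summaries are available. Cycles (recursion) are cut by
    the visited set."""
    order, visited = [], set()

    def walk(proc):
        visited.add(proc)
        for callee in callgraph.get(proc, []):
            if callee not in visited:
                walk(callee)
        order.append(proc)

    walk(root)
    return order

# Hypothetical CALL-graph: main calls parse and eval; eval is recursive.
graph = {"main": ["parse", "eval"], "eval": ["parse", "eval"]}
order = postorder(graph, "main")
```

When a summary changes, only its (transitive) callers need recomputation, which the same ordering makes easy to schedule.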
56
LEERSTOEL ELEKTRONISCHE SCHAKELINGEN
57
Naam kandidaat: C.H.J. van Dinther
Afstudeerdatum: 31-08-1999
Afstudeerproject: High Frequency linear and non-linear MOSFET modeling
Begeleiding: Ir. E.P. Vandamme, Dr.ir. D. Schreurs, Dr.ir. L.K.J. Vandamme
Afstudeerhoogleraar: Prof.dr.ir. W.M.G. van Bokhoven
Summary: System-on-chip is the hot topic in circuit design. This creates a large need for a good HF MOSFET model for accurate HF small-signal and non-linear simulations. Two models are researched: the physics-based BSIM3 model and an equivalent circuit model. Both models can incorporate the Non-Quasi-Static (NQS) behaviour, which becomes prominent at very high frequencies. The NQS behaviour of a MOSFET is caused by the transit time of the free charge carriers and the inertia of the inversion layer. Modeling starts with measurements and extraction of the model parameters, which differs between the two models:
- BSIM3 model: After DC and CV measurements on MOSFETs and capacitors with different geometries, the BSIM3 model parameters are extracted. By choosing an appropriate set of devices to measure, the extracted model parameters yield accurate simulations of the DC behaviour.
- Equivalent circuit model: A modified equivalent circuit is used to obtain the parameters for the non-linear model. A charge and current model for the gate and drain is deduced, which makes non-linear simulations possible.
An improved 3-step de-embedding is introduced for reliable de-embedding of pad parasitics up to GHz frequencies.
The HF small-signal S-parameter simulations show good results for both models. The simulated figures-of-merit fT and fmax deviate less than 5% from the measurements. It is shown that extrapolating the measured curves does not always give a correct extraction of the figures-of-merit¹. The results also show that the NQS effects can be neglected for HF-suitable transistors, since for these transistors the NQS effects become important only at frequencies above the figures-of-merit. For the non-linear measurements and simulations, both single-tone and two-tone inputs are used. The single-tone experiments provide information about the non-linear amplification and higher harmonics. The two-tone experiments demonstrate the quality of the model when mixing signals; for this, the intermodulation products are monitored. Very good non-linear simulation results are found for both models. The results in this thesis show that both models are suitable for modeling MOS transistors at GHz frequencies, both linear and non-linear. The BSIM3 model will generally be the best choice, since it is more scalable and offers quicker results with the non-linear simulations. The equivalent circuit based model is a welcome alternative when no structures for the extensive DC and CV measurements are available, or when easy and quick adaptations of the model are necessary.
¹ E.P. Vandamme, C. van Dinther, D. Schreurs, B. Nauwelaers, G. Badenes and L. Deferm, "Reliable extraction of RF figures-of-merit for MOSFETs," accepted for Proceedings of the 29th European Solid-State Device Research Conference, ESSDERC'99.
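As a reminder of what the simulated figure-of-merit represents, the textbook first-order approximation fT ≈ gm / (2π(Cgs + Cgd)) can be evaluated directly; the parameter values below are assumed round numbers for illustration, not measurements or extractions from the thesis.

```python
import math

def unity_gain_frequency(gm, cgs, cgd):
    """Textbook estimate of the MOSFET unity-current-gain frequency:
    f_T = gm / (2*pi*(Cgs + Cgd)). A first-order approximation, not the
    BSIM3 or equivalent-circuit extraction used in the thesis."""
    return gm / (2 * math.pi * (cgs + cgd))

# Assumed small-signal values: gm = 10 mS, Cgs = 50 fF, Cgd = 15 fF.
ft = unity_gain_frequency(gm=10e-3, cgs=50e-15, cgd=15e-15)  # ~24 GHz
```

The full extraction additionally needs the parasitics removed by de-embedding, which is why extrapolating raw measured |h21| curves can misestimate fT.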
58
Naam kandidaat: W.M. van Spengen
Afstudeerdatum: 20 april 1999
Afstudeerproject: "Possibilities in failure analysis using a micro-Raman measurement system"
Begeleiding: Dr. L. Vandamme / Dr. I. De Wolf (IMEC Leuven)
Afstudeerhoogleraar: Prof.dr.ir. W.M.G. van Bokhoven
Summary: In this thesis, we look at some improvements to the micro-Raman spectroscopy measurements and at changes in the system setup to perform different kinds of measurements in the micro-electronic failure analysis domain. A brief overview of the different modern failure analysis techniques is given, and the advantages and disadvantages of optical failure analysis techniques are discussed. With a micro-Raman spectroscope it is possible to do mechanical stress measurements with a spatial resolution of less than a micron. To perform measurements on samples with large height differences, a very accurate autofocus system is needed. In this work, an autofocus system has been developed in such a way that its control electronics can also be used for a variety of failure analysis measurements. The autofocus system makes use of the laser light reflected from a sample under the microscope of the micro-Raman spectrometer. This light is focused through a pinhole onto a photodetector. The light intensity on this detector is high when the sample is in focus on the microscope and lower if it is out of focus: the intensity distribution as a function of objective position resembles a Lorentzian function. The intensity peak is used for the focusing of the objective by the autofocus module, which uses a piezo nanopositioner to move the objective to the right focus position. By curve-fitting the intensity distribution along the focusing axis, we can estimate the top of the intensity distribution with high precision: this is our focal plane, and hence we can calculate the objective-to-sample distance. If we collect these intensity distributions at different positions on the sample, we can acquire a very precise height map of the sample. The spatial resolution in the focal plane is determined by diffraction of the laser light, and is shown to be substantially improved by digital image enhancement techniques.
Ordinary deconvolution techniques are used to prove that the resolution can indeed be improved this way, and blind deconvolution is suggested as a possibly useful alternative to this technique. Instead of monitoring a photodetector, the autofocus acquisition electronics can also be used to monitor the voltage across a constant current supply. If this current source is connected to the power terminals of an IC, and a focused laser beam is scanned over the surface of the sample, Light Induced Voltage Alteration (LIVA) images are obtained. Temperature Induced Voltage Alteration (TIVA) images could be obtained if a long-wavelength laser and a differential preamplifier are used. Finally, Kerr Rotation based Current Measurement (KRCM) is proposed to measure currents flowing in metal lines.
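The curve-fitting step can be sketched as follows. For a Lorentzian L(x) = A / (1 + ((x - x0)/γ)²), the reciprocal 1/L is exactly quadratic in x, so the peak (focal-plane) position can be found with a plain least-squares parabola fit to 1/intensity. The sample values below are synthetic; a real system would also have to cope with noise and a background level, which this sketch ignores.

```python
def lorentzian(x, x0=2.0, gamma=0.5, amp=10.0):
    """Model of the reflected intensity near focus: a Lorentzian peak at x0."""
    return amp / (1.0 + ((x - x0) / gamma) ** 2)

def peak_position(xs, ys):
    """Estimate the peak of a Lorentzian from sampled intensities by
    fitting z = a*x^2 + b*x + c to z = 1/y and returning the vertex."""
    zs = [1.0 / y for y in ys]
    n = len(xs)
    Sx = sum(xs)
    Sx2 = sum(x ** 2 for x in xs)
    Sx3 = sum(x ** 3 for x in xs)
    Sx4 = sum(x ** 4 for x in xs)
    Sz = sum(zs)
    Sxz = sum(x * z for x, z in zip(xs, zs))
    Sx2z = sum(x * x * z for x, z in zip(xs, zs))

    def det3(m):  # determinant of a 3x3 matrix
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Normal equations of the least-squares fit, solved by Cramer's rule.
    A = [[Sx4, Sx3, Sx2], [Sx3, Sx2, Sx], [Sx2, Sx, n]]
    r = [Sx2z, Sxz, Sz]
    D = det3(A)
    a = det3([[r[0], A[0][1], A[0][2]],
              [r[1], A[1][1], A[1][2]],
              [r[2], A[2][1], A[2][2]]]) / D
    b = det3([[A[0][0], r[0], A[0][2]],
              [A[1][0], r[1], A[1][2]],
              [A[2][0], r[2], A[2][2]]]) / D
    return -b / (2.0 * a)  # vertex of the parabola = Lorentzian peak

positions = [i * 0.25 for i in range(17)]  # objective positions 0..4 (arbitrary units)
x0_est = peak_position(positions, [lorentzian(x) for x in positions])
```

Fitting the whole curve rather than just taking the brightest sample is what gives the sub-sample precision the thesis relies on for the height map.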
59
CAPACITEITSGROEP ELEKTRISCHE ENERGIETECHNIEK
60
LEERSTOEL ELEKTRISCHE ENERGIETECHNIEK
61
Naam kandidaat: M. Siffels
Rapport nr.: EG/99/901
Afstudeerdatum: 7 december 1999
Afstudeerproject: Evaluatie van tariefstructuren voor elektriciteitstransport
Begeleiding: Ir. R.B.J. Hes (ENW), Ir. H.H. Overbeek (ENW), Ir. W.F.J. Kersten
Afstudeerhoogleraar: Prof.ir. G.C. Damstra
Samenvattiog: Op het moment van schrijven van dit afstudeerverslag is de Europese energiewereld hevig in beweging door invoering van nieuwe Europese wetgeving ter liberalisering van de energiemarkten. Als gevolg van deze wetgeving zal concurrentie worden geintroduceerd. Om dit te kunnen realiseren zijn wijzigingen in de tariefstructuren voor het transporteren van elektriciteit nodig. Dit afstudeerverslag behandelt mogelijke tariefstructuren voor het transporteren van elektriciteit. Hiertoe wordt begonnen met een algemene verkenning van de kosten die gemoeid zijn met het transporteren van elektriciteit en hoe deze in rekening gebracht kunnen worden. Vervolgens worden beginselen van het tariferen aangegeven; Op welke wijzen kunnen de kosten doorberekend worden naar de gebruikers van het netwerk. De volgende stap is het beschrijven van een rekenmethodiek die gebruikt kan worden om de kosten van de verbindingen, onderstations, etc. zo nauwkeurig mogelijk toe te rekenen aan de gebruikers van het netwerk, waarbij wordt geprobeerd het kostenveroorzakingsprincipe te volgen. Aangegeven wordt hoe deze rekenmethodiek is geimplementeerd en hoe deze zal worden toegepast op een deel van het 150 kV-netwerk (tussen Diemen, Oterleek en Velsen) van Noord West Net. Op basis van de beschreven rekenmethodiek wordt een tariefstructuur beschreven. Daamaast wordt een tariefstructuur geanalyseerd en beschreven zoals deze in Nederland ingevoerd zal gaan worden per I januari 2000. Ook wordt aangegeven met welke kosten van het 150 kV-netwerk er gerekend zal gaan worden en hoe deze kosten hoven tafel zijn gekregen. Beide tariefstructuren worden vervolgens toegepast op een aantal toekomstscenario's; situatie's die in het netwerk in de komende jaren kunnen optreden door het bijbouwen van decentrale opwek ofhet sluiten van een centrale, etc. Per scenario worden de resultaten beschreven en vergeleken. 
On the basis of these results and of other findings made in the process described above, conclusions are drawn and recommendations are made. The main conclusions are that neither of the two tariff structures is desirable for the present situation and that an intermediate form should really be found: one that is not too complicated and laborious, that allows free access to the energy market and free trade in electricity, and that still approximates the cost-causation principle, so as to give better economic signals to the network users. Furthermore, a number of threats to the network operators resulting from the changing market conditions have been identified: a changing dispatch of generating units and, as a consequence, a changing power balance in the network. As a result, among other things, the network losses may increase, the reliability of supply may decline, transmission constraints may arise and problems with the reactive-power balance may develop. The final conclusion is that it remains to be seen whether liberalisation of the electricity markets will indeed lead to the desired price reductions.
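The cost-causation allocation described above can be illustrated with a toy sketch. All asset names, flows and cost figures below are hypothetical and far simpler than the method in the report; the sketch only shows the proportional-allocation idea: each asset's annual cost is divided over the network users in proportion to the flow each user causes on that asset.

```python
# Toy sketch of cost-causation allocation. Each asset's annual cost is
# divided over users in proportion to the flow (MW) each user causes on it.
# All names and numbers are hypothetical, not taken from the report.

def allocate_costs(asset_costs, flows):
    """asset_costs: {asset: annual cost}; flows: {asset: {user: MW caused}}.
    Returns {user: allocated cost}."""
    charges = {}
    for asset, cost in asset_costs.items():
        total_flow = sum(flows[asset].values())
        for user, mw in flows[asset].items():
            charges[user] = charges.get(user, 0.0) + cost * mw / total_flow
    return charges

costs = {"line_A": 120_000.0, "substation_B": 80_000.0}
flows = {"line_A": {"gen_1": 60.0, "load_2": 40.0},
         "substation_B": {"gen_1": 20.0, "load_2": 80.0}}
charges = allocate_costs(costs, flows)

# Full cost recovery: the total charged equals the total asset cost.
assert abs(sum(charges.values()) - sum(costs.values())) < 1e-6
```

A real allocation would have to deal with counterflows, reserve capacity and reactive power, which is exactly why the report notes that a pure cost-causation tariff quickly becomes complicated and laborious.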
CHAIR OF HIGH-VOLTAGE ENGINEERING & ELECTROMAGNETIC COMPATIBILITY
Candidate: M. Bemmelmans
Report no.: EH.99.A.xss
Graduation date: April 1999
Project: Issues and Sensors in EMC
Supervision: Prof. H.C. Reader (University of Stellenbosch, South Africa)
Graduation professor: Prof.dr.ir. P.C.T. van der Laan
Summary: This report describes the results of the work performed for the final project at the Eindhoven University of Technology in the Netherlands. The work was carried out at the University of Stellenbosch, South Africa. Three topics have been dealt with during the project. First of all, chapter 1 describes the experiments performed to illustrate some basic features of electromagnetic compatibility (EMC). Coupling mechanisms between cables are studied and the experimental results are explained by theoretical considerations. The experiments are carried out using different set-ups to illustrate the importance of the layout of every set-up: a difference in the layout of the measurement equipment's cables can cause different results. Furthermore, the behaviour of a digital circuit with respect to the interference it produces is described. The noise caused by the fast rise and fall times of the voltage and the current affects measurements on the circuit, as well as the neighbouring circuitry. In chapter 2 the electromagnetic interference (EMI) on the output of a switched-mode power supply (SMPS) is investigated using a Bersier probe and a Rogowski coil as measurement tools. The characteristics of these devices are determined and a comparison is made between the two. Both sensors show the presence of an induced signal on the output leads, and show that it is possible to reduce this signal. It is illustrated that one cannot compare the Bersier probe and the Rogowski coil directly without taking precautions in the calibration process and during the experimental phase. Finally, a magnetic field probe to measure surface current distributions is studied, constructed and tested, as described in chapter 3. The device operates according to the square-loop magnetic field probe principle. By comparing theoretical and experimental results it is shown that the sensor is able to measure both direction and phase.
A calibration factor has to be taken into account. The probe can make stable and reproducible measurements, provided that the right precautions are taken. High-quality cables are necessary for this particular device and an accurate system to move the probe at the same height along the surface is desirable.
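The Rogowski coil used in chapter 2 is a current sensor whose output voltage is proportional to the time derivative of the measured current, v(t) = M·dI/dt, so the primary current must be recovered by integrating the coil voltage. A minimal sketch of this principle (the coil sensitivity M and the test waveform are assumed values for illustration, not the report's calibration):

```python
import math

# Sketch of the Rogowski-coil measurement principle: the coil outputs
# v(t) = M * dI/dt, so the primary current is recovered by integrating
# the coil voltage and dividing by M. M and the test current waveform
# are hypothetical values, chosen only for illustration.
M = 1e-6          # mutual inductance in V per (A/s), assumed
f = 50e3          # 50 kHz sinusoidal test current
dt = 1e-8         # sampling interval in seconds
t = [k * dt for k in range(2000)]                       # one full period
i_true = [10.0 * math.sin(2 * math.pi * f * tk) for tk in t]

# Simulated coil output: forward difference approximates M * dI/dt.
v_coil = [M * (i_true[k + 1] - i_true[k]) / dt for k in range(len(t) - 1)]

# Integrate the coil voltage to reconstruct the current. Because the
# simulated voltage is a forward difference, this rectangle-rule sum
# recovers the sampled current exactly.
i_rec = [0.0]
for vk in v_coil:
    i_rec.append(i_rec[-1] + vk * dt / M)

assert max(abs(a - b) for a, b in zip(i_true, i_rec)) < 1e-6
```

In a real measurement the integration is done by an analogue integrator or in post-processing, and the calibration factor mentioned above enters through the value of M.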
Candidate: K.P.M. Gommers
Report no.: EH.99.A.156
Graduation date: 15-06-1999
Project: Return Voltage measurements on the XLPE insulation of medium-voltage cables
Supervision: Dr. P.A.A.F. Wouters
Graduation professor: Prof.dr.ir. P.C.T. van der Laan
Summary: "Return Voltage measurements on the XLPE insulation of medium-voltage cables." Over the past two decades, research has been done into various water-tree diagnostics for XLPE cables, both in the frequency domain and in the time domain. One time-domain method, based on "Return Voltage Measurements", has remained underexposed. Commissioned by KEMA in Arnhem, the relation between the degradation of XLPE cables by water trees and the "Return Voltage" (RV) has been investigated. KEMA made five cables available; their condition before the investigation was unknown. To activate the water trees (especially the vented trees), the cables were wetted from the inside via the central conductor. The cable is charged with a DC voltage of 1 kV and subsequently short-circuited, each for a set time, after which the RV is recorded. In the (possibly degraded) cable there is a spectrum of dielectric processes with characteristic time constants. By choosing the charging and shorting times, one "zooms in", as it were, on a part of that spectrum. The charging and shorting times were varied systematically (charging times: 5 s, 50 s, 500 s; shorting times: 0.5 s, 5 s, 50 s, 500 s and 5000 s). The Return Voltage was measured with a high-impedance electrometer. When measuring the RV, the various processes can, depending on the charging and shorting times, be described with an effective time constant. On this basis, two models are presented by way of illustration. A dielectric model gives insight into the macroscopic behaviour of the degraded XLPE insulation; here the Debye model was chosen for the dielectric response function. The electrical model, in which a water tree is represented by a high resistance and a capacitor, puts more emphasis on the local character of a water tree.
For both models the RV is a sum of two exponential functions with two time constants and two amplitudes, equal in magnitude but opposite in sign. The first, short time constant gives information about the dielectric processes in the cable. The second is caused by the measuring and leakage impedance of the measuring set-up, together with the cable capacitance. The measured RVs were fitted to this function. A slight systematic deviation results from the presence of a spectrum of processes, whereas only one process is represented in the fitted function. The measurement results show that the dominant processes have a time constant of the order of one minute. The amplitudes of the Return Voltages are correlated with the content and the maximum length of the vented trees on the conductor side. The vented trees on the earth-screen side presumably had no influence, because they were not wetted. The bow-tie trees in some cables do not appear to contribute to the Return Voltage.
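The two-exponential form described above can be written as u(t) = A(e^(−t/τ₂) − e^(−t/τ₁)) with τ₁ < τ₂; setting du/dt = 0 gives the peak position analytically. A sketch with illustrative parameters (the short time constant is chosen only to match the "order of one minute" finding; the long time constant and the amplitude are hypothetical):

```python
import math

# Two-exponential Return Voltage model from the summary:
#   u(t) = A * (exp(-t / tau2) - exp(-t / tau1)),  tau1 < tau2,
# i.e. two amplitudes equal in magnitude and opposite in sign.
# Parameter values below are illustrative only, not fitted data.
A = 100.0        # amplitude in volts (hypothetical)
tau1 = 60.0      # short time constant: dielectric processes (~1 minute)
tau2 = 600.0     # long time constant: measuring/leakage impedance (assumed)

def u_rv(t):
    return A * (math.exp(-t / tau2) - math.exp(-t / tau1))

# Setting du/dt = 0 gives the analytic peak position:
#   t* = ln(tau2 / tau1) / (1/tau1 - 1/tau2)
t_peak = math.log(tau2 / tau1) / (1.0 / tau1 - 1.0 / tau2)

# Numerical cross-check: the sampled maximum lies at the analytic t*.
ts = [k * 0.1 for k in range(10000)]
t_num = max(ts, key=u_rv)
assert abs(t_num - t_peak) < 0.2
```

Fitting measured RV curves to this function (as done in the report) would additionally recover A, τ₁ and τ₂ from the data; the slight systematic deviation mentioned above appears because the real cable contains a spectrum of τ₁ values rather than a single one.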