McGill University

School of Computer Science











COMP 765 Spatial Representation and Mobile Robotics Project


Local Path Planning Using

Virtual Potential Field






By Hani Safadi













In this report we discuss our implementation of a local path planning algorithm based on the virtual potential field described in [1]. The algorithm uses virtual forces to avoid being trapped in local minima. We performed simulations and experiments and compared them to the results presented in the paper; they show good performance and the ability to avoid the local minimum problem in most cases.




Keywords: mobile robots, navigation, path planning, local path planning, virtual force field, virtual potential field.


1. Introduction


Autonomous navigation relies on the ability of the robot to reach its goal while avoiding the obstacles in the environment. In some cases the robot has complete knowledge of its environment and plans its movement accordingly. In general, however, the robot only knows the goal and must reach it using its sensors to gather information about the environment.


Hierarchical systems decompose the control process by function. Low-level processes provide simple functions that are grouped together by higher-level processes in order to provide overall vehicle control [2].

We can think of the high-level processing as the planning level, where high-level plans are generated, and of the low-level processing as reactive control, where the robot must avoid the obstacles sensed by its sensors. The latter is called local path planning.


Local path planning must be performed in real time, and it takes priority over the high-level plans. Therefore, it is sometimes called real-time obstacle avoidance.


One local path planning method is the potential field method [3]. It is an attractive method because of its elegance and simplicity [1]. However, with this method the robot can easily fall into a local minimum, so additional effort is needed to avoid this situation.


This report is organized as follows:

In section 2 we introduce the potential field method and show how it can be trapped in a local minimum. In section 3 we describe the modification introduced in [1] to avoid the local minimum. In section 4 we present our implementation of the algorithm. Section 5 describes the test setup and the obtained results, comparing them to the results described in the original paper. Finally, section 6 concludes the report.


2. The Potential Field Method


The idea of a potential field is taken from nature: for instance, a charged particle navigating a magnetic field, or a small ball rolling down a hill. Depending on the strength of the field or the slope of the hill, the particle or the ball arrives at the source of the field, the magnet or the valley in these examples.

In robotics, we can simulate the same effect by creating an artificial potential field that attracts the robot to the goal.

By designing an adequate potential field, we can make the robot exhibit simple behaviors.

For instance, let us assume that there is no obstacle in the environment and that the robot should seek the goal. In conventional planning, one would calculate the relative position of the robot to the goal and then apply the suitable forces to drive the robot there.

In the potential field approach, we simply create an attractive field directed into the goal. The potential field is defined across the entire free space; at each time step we evaluate the field at the robot's position, compute the force it induces, and move the robot according to this force.

This figure illustrates this concept [4]:













We can also define another behavior that allows the robot to avoid obstacles: we simply make each obstacle generate a repulsive field around it. If the robot approaches an obstacle, a repulsive force acts on it, pushing it away from the obstacle.














The two behaviors, seeking and avoiding, can be combined by summing the two potential fields; the robot then follows the force induced by the combined field to reach the goal while avoiding the obstacles:
















In mathematical terms, the overall potential field is:

U(q) = Ugoal(q) + Uobstacles(q)

and the induced force is:

F = -∇U(q) = -(∂U/∂x, ∂U/∂y)

Robot motion can be derived by taking small steps driven by the local force [2].
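As a concrete sketch of this stepping scheme, the force can be obtained numerically as the negative gradient of the potential. The quadratic potential and the goal location below are illustrative assumptions for the sketch, not the exact field used in this work:

```java
// Sketch: one gradient-descent step on a potential field U(x, y).
public class GradientStep {
    // Example potential: parabolic attractor centered on a goal at (5, 5).
    static double U(double x, double y) {
        double dx = x - 5, dy = y - 5;
        return dx * dx + dy * dy;
    }

    // Force = -gradient of U, estimated by central finite differences.
    static double[] force(double x, double y) {
        double h = 1e-5;
        double fx = -(U(x + h, y) - U(x - h, y)) / (2 * h);
        double fy = -(U(x, y + h) - U(x, y - h)) / (2 * h);
        return new double[]{fx, fy};
    }

    // Take one small step along the local force direction.
    static double[] step(double x, double y, double stepSize) {
        double[] f = force(x, y);
        return new double[]{x + stepSize * f[0], y + stepSize * f[1]};
    }
}
```

A central difference is exact for a quadratic potential, so in this sketch the step direction points straight at the goal.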


Many potential field functions have been studied in the literature. Typically Ugoal is defined as a parabolic attractor [2]:

Ugoal = dist(q, goal)^2

where dist is the Euclidean distance between the state q and the goal.

Uobstacles is modeled as a potential barrier that rises to infinity as the robot approaches the obstacle [2]:

Uobstacles = dist(q, obstacle)^(-1)



where dist is the Euclidean distance between the robot in state q and the closest point on the obstacle. The repulsive force is computed with respect to either the nearest obstacle or summed over all the obstacles in the environment.

(a) attractive field, (b) repulsive field, (c) the sum




In this work, a more sophisticated repulsive potential is used:

Uobstacles = (1/dist(q, obstacle) - 1/p)^2 · dist(q, goal)^2    if dist(q, obstacle) ≤ p

Uobstacles = 0                                                  if dist(q, obstacle) > p

where p is a positive constant denoting the distance of influence of the obstacle.
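The attractive and repulsive potentials defined above can be evaluated directly. In this sketch the obstacle is treated as a point, and the goal position, obstacle position, and influence radius p (`rho` below) are illustrative assumptions:

```java
// Sketch of the goal and obstacle potentials from the formulas above.
public class PotentialField {
    static double dist(double ax, double ay, double bx, double by) {
        return Math.hypot(ax - bx, ay - by);
    }

    // Ugoal = dist(q, goal)^2
    static double uGoal(double x, double y, double gx, double gy) {
        double d = dist(x, y, gx, gy);
        return d * d;
    }

    // Uobstacle = (1/d_obs - 1/rho)^2 * dist(q, goal)^2 inside the
    // influence radius rho, 0 outside. (d_obs = 0 is not handled here.)
    static double uObstacle(double x, double y, double ox, double oy,
                            double gx, double gy, double rho) {
        double d = dist(x, y, ox, oy);
        if (d > rho) return 0;
        double t = 1.0 / d - 1.0 / rho;
        return t * t * uGoal(x, y, gx, gy);
    }

    // Illustrative layout: goal at (10, 0), one obstacle at (5, 1), rho = 2.
    static double uTotal(double x, double y) {
        return uGoal(x, y, 10, 0) + uObstacle(x, y, 5, 1, 10, 0, 2);
    }
}
```

Note that weighting the repulsive term by dist(q, goal)^2 makes the combined potential vanish at the goal, unlike the simpler 1/dist barrier above.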


The potential field method was developed as an online collision avoidance approach, applicable when the robot has no prior model of the obstacles but senses them during motion execution [1]. Its reliance on local information, however, can trap it in a local minimum.

An example of that is a U-shaped trap, where the robot is attracted toward the trap but never reaches the goal [2]:

Another example is when the robot faces a long wall. The repulsive forces do not allow the robot to reach the sides of the wall, and hence it never reaches the goal.


3. Avoiding the Local Minimum


Several methods have been suggested to deal with the local minimum phenomenon in potential field methods. One idea is to avoid the local minimum by integrating the potential field with the high-level planner, so that the robot can use the information derived from its sensors but still plan globally [2].

Another set of approaches allows the robot to fall into a local minimum and then tries to fix the situation by:

- Backtracking from the local minimum and then using another strategy to avoid it.

- Performing random movements, in the hope that they will help escape the local minimum.

- Using a procedural planner, such as wall following or one of the bug algorithms, to get around the obstacle where the local minimum is.

- Using more complex potential fields that are guaranteed to be local-minimum free, such as harmonic potential fields [2].

- Changing the properties of the potential field at the position of the local minimum, so that the robot is gradually repelled from it.


All these approaches rely on the robot being able to discover that it is trapped, which is itself an ill-defined problem.


The method used in this work is similar to the last point: the properties of the potential field are changed.

When the robot senses that it is trapped, a new force, called the virtual free space force (or simply the virtual force), is applied to it.

The virtual force is proportional to the amount of free space around the robot, and it helps pull the robot out of the local minimum area. Once the virtual force is applied, the robot is dragged out of the local minimum and begins moving again using the potential field planner; it is now unlikely to be trapped in the same local minimum again.


More formally, the virtual force Ff is proportional to the free space around the robot. It is calculated by:


Ff = Fcf (cos(θ) ex + sin(θ)ey) [1]

where θ is the robot-to-free-space orientation and Fcf is a force constant.

Unfortunately, the paper does not discuss in detail what "robot-to-free-space orientation" means, nor does it include any figure explaining ex and ey. However, it can be inferred that the force points toward the free space around the robot, in the direction opposite to the obstacle, and that its x and y components are given by Fcf cos(θ) ex and Fcf sin(θ) ey.


The total force will be the sum of the attractive force (derived from the goal potential field), the repulsive force (derived from the obstacle fields), and the virtual force:

F = Fatt + Frep + Ff
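A minimal sketch of the virtual force term and the resulting total force, under our reading that ex and ey are the unit vectors of the x and y axes; Fcf and θ are tuning inputs, not values fixed by the paper:

```java
// Sketch of the virtual free-space force and the total force sum.
public class VirtualForce {
    // Ff = Fcf * (cos(theta), sin(theta)), pointing toward free space.
    static double[] virtualForce(double fcf, double theta) {
        return new double[]{fcf * Math.cos(theta), fcf * Math.sin(theta)};
    }

    // F = Fatt + Frep + Ff, summed component-wise.
    static double[] total(double[] fAtt, double[] fRep, double[] ff) {
        return new double[]{fAtt[0] + fRep[0] + ff[0],
                            fAtt[1] + fRep[1] + ff[1]};
    }
}
```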


Finally, to detect that the robot is trapped, we use an open-loop position estimator to estimate the current position of the robot. If the position does not change for a considerable amount of time (a predefined threshold), the virtual force is generated to pull the robot out of the local minimum.
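The trap detector just described can be sketched as follows; the displacement tolerance and the step-count threshold are tuning assumptions, not values from the paper:

```java
// Sketch: declare a local minimum if the estimated position stays
// within a small radius for too many consecutive time steps.
public class TrapDetector {
    final double epsilon;   // max displacement still counted as "stuck"
    final int maxStuck;     // consecutive stuck steps before declaring a trap
    double lastX, lastY;
    int stuckSteps = 0;

    TrapDetector(double epsilon, int maxStuck, double x0, double y0) {
        this.epsilon = epsilon;
        this.maxStuck = maxStuck;
        lastX = x0;
        lastY = y0;
    }

    // Feed one position estimate per time step; returns true when trapped.
    boolean update(double x, double y) {
        double moved = Math.hypot(x - lastX, y - lastY);
        if (moved < epsilon) stuckSteps++;
        else stuckSteps = 0;   // any real progress resets the counter
        lastX = x;
        lastY = y;
        return stuckSteps >= maxStuck;
    }
}
```

This is only a heuristic: as discussed in section 6, the robot rarely lands exactly on the minimum, so epsilon and maxStuck trade false alarms against slow detection.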


Note that the robot's speed decreases as it approaches an obstacle (due to the repulsive force). Therefore, the robot cannot stay in a non-local-minimum place for a long time, unless this is planned by the high-level planner.


4. Implementation


An implementation of the potential field algorithm, both with and without the virtual force, was developed in the Java programming language; we outline it here. The full code is in the appendix of this report.


One advantage of the potential field method is that it is easy to implement.

We need a representation for the robot, the obstacles, and the world.

Then, at each time step, the potential field is evaluated at the position of the robot, and the robot is moved by the force induced by the potential field.
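The per-step sense-plan-act loop can be sketched as follows. The field is passed in as a function, so the same loop works with or without the virtual force; the force-to-velocity integration mirrors the vx += f·dt/m update used in our Robot class, while the names and parameters here are illustrative:

```java
// Sketch of the simulation step: evaluate the force at the robot's
// position, integrate velocity, then integrate position.
public class SimLoop {
    double x, y, vx, vy;
    final double dt, m;

    SimLoop(double x, double y, double dt, double m) {
        this.x = x; this.y = y; this.dt = dt; this.m = m;
    }

    // One step; the field maps (x, y) to a force vector {fx, fy}.
    void step(java.util.function.BiFunction<Double, Double, double[]> field) {
        double[] f = field.apply(x, y);
        vx += f[0] * dt / m;   // a = F/m, integrated over dt
        vy += f[1] * dt / m;
        x += vx * dt;          // explicit Euler position update
        y += vy * dt;
    }
}
```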


An obstacle is simply a particle that has a position, a size, and a charge; in the implementation it also provides simple methods to report its distance from the robot.

    double diam;
    double mass;
    Point p;
    double charge;
    public Obstacle(Point p, double charge, double diam) { ... }
    public Obstacle(Point p) { ... }
    public double distanceSq(Robot r) { ... }
    public double distance(Robot r) { ... }


The robot has a position, a size, and the ability to sense the environment using methods that approximately simulate a sonar sensor.

    double x, y;
    double vx, vy;
    double dt;
    double m;
    double fMax;
    ArrayList obstacles;
    public double diam;
    Obstacle target;
    boolean virtualforce = false;
    public Robot(Point p, ArrayList obstacles, double dt, double m, double fMax, double diam) { ... }
    public void updatePosition() { ... }
    boolean range(Obstacle ob, double range) { ... }
    double addNoise(double x, double mean, double stddev) { ... }


The world is a set of obstacles and the robot. It also holds the clock of the simulation: at each time step, the robot senses the obstacles around it, plans its movement using the potential field (with or without the virtual force), and finally performs the action by moving itself according to the force generated by the field.

    ArrayList obstacles = new ArrayList();
    obstacles.add(new Obstacle(new Point(0, 0), +100, 3));
    Robot r = new Robot(new Point(800, 600), obstacles, 0.1, 40, 4000, 15);
    Thread t = new Thread();


Here is the class diagram of the project:


The interface of the program allows the user to create the environment by adding obstacles, run the simulation, and control the addition of the virtual force.

The user can add obstacles and turn the virtual force on and off, even while the simulation is running. He can also pause and resume the simulation.

The graphical user interface will show the robot and its path during the simulation:


5. Experiments & Results


The potential field method with the virtual force correction is simulated and compared to the original potential field method. The paper [1] presents two comparisons between the two methods, demonstrating the feasibility of the virtual force modification.

We also performed other experiments with our implementation: two of them replicate the examples shown in the paper, and the others are new setups that test the feasibility of the modification in other situations.

The world is 30x30 cm. The robot is treated as a particle with a radius of 0.7 cm, and the obstacles have radii of 1 cm. The robot uses sonar as its sensor; the simulated sonar has a range of 5 cm and a beam width of 120 degrees [1].


The following experiment compares the original potential field (left) to the modified one (right), in the U-trap example: [1]


The following experiment compares the original potential field (left) to the modified one (right), in the wall-trap example: [1]


The following experiment shows that, with the virtual force modification, the robot manages to navigate complex environments: [1]


The following experiment duplicates the U-trap situation using our implementation:


The following experiment duplicates the wall-trap situation using our implementation:

We notice that both methods managed to avoid the wall, although the virtual force method exhibits a more interesting behavior.

The reason might be that the wall is not long enough, so we redid the experiment with a longer wall:

Now the robot is trapped, but with the addition of the virtual force it can get out of the local minimum.


Here is an experiment in more complex environments:


We notice that the virtual force pulls the robot away from a set of obstacles.



Now we perform more exotic experiments to compare the behavior of both approaches (left: without the virtual force; right: with it):



From these examples, we notice that the virtual force robot tends to stay away from obstacles, even if there is a path near or through them. This is because there is no deterministic way to detect the local minimum, and the heuristic used may be inaccurate in several cases. So although the virtual force helps escape the local minimum in most cases, the generated solution is not optimal in terms of path length. When no local minimum exists, the original potential field gives better performance than the virtual force method.


6. Final Remarks and Conclusions


The paper [1] proposed a virtual force method for avoiding local minima in potential field methods. The potential field is used as the basic platform for motion planning, since it has the advantages of simplicity and real-time computation. To get rid of the local minima caused by obstacles, the paper introduced the virtual force concept; the virtual force is created according to the particular local minimum condition at hand [1].

While this approach manages to solve the problem in many cases, it also has several shortcomings:

1. Adding the force component seems to be a rough heuristic, if not a very ad-hoc practice. Defining the force strength and orientation is also more of an art than a science.

2. Adding the force can also make the robot exhibit strange behavior, like being obstacle-phobic. A robot that always flees from obstacles, even ones that do not block its way completely, is rather strange.

3. The path generated by the virtual force method is not optimal; in fact it can be far from optimal in certain cases, due to oscillation between obstacles and free space (see the wall example).

4. The virtual force relies on the ability of the robot to detect that it is in a local minimum. This is very difficult in practice, since the robot will not land exactly on the local minimum [2], and because detecting the minimum also relies on heuristics, which further degrades the robustness of the method.

5. The local minimum is not guaranteed to be avoided; this can happen if the force strength and orientation do not suit the dimensions of the environment.

6. Adding this force may also make situations that are solvable by the original potential field infeasible, for instance when the path to the goal passes through a small gap between obstacles.


In my opinion, using heuristics can sometimes be beneficial, but they are not guaranteed to be optimal. A systematic approach is more suitable for such online planning problems, because it can be analyzed formally without resorting to all these constants and heuristics. For instance, if the robot switches to a bug algorithm when it hits an obstacle, it can avoid the local minimum, and its performance will not suffer from problems 1, 2, 5 and 6.


7. Acknowledgments


We would like to thank the people with whom we had very interesting discussions. Most notably, professor Gregory Dudek from McGill University, who was always keen to discuss and help during the project, and many other students and colleagues in the COMP 765 class at McGill University for interesting discussions and ideas.


8. References


  1. Ding Fu-guang, Jiao Peng, Bian Xin-qian and Wang Hong-jian, "AUV Local Path Planning Based on Virtual Potential Field", Proceedings of the 2005 IEEE International Conference on Mechatronics and Automation, Vol. 4, 2005, pp. 1711-1716.
  2. Gregory Dudek and Michael Jenkin, Computational Principles of Mobile Robotics, Cambridge University Press, Cambridge, 2000, ISBN 0-521-56021-7.
  3. O. Khatib, "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots", Proceedings of the IEEE International Conference on Robotics and Automation, 1985, pp. 500-505.
  4. Michael A. Goodrich, Potential Fields Tutorial.
Appendix I: Source Code


01 package virtualforcerobot;
03 import java.awt.*;
05 public class Obstacle {
06   static double DIAM = 10;
07   static double CHARGE = -1;
08   static double MASS = 10;
09     double diam;
10     double mass;
11     Point p;
12     double charge;
13     public Obstacle(Point p, double charge, double diam) {
14         this.diam = diam;
15         this.p = p;
16         this.charge = charge;
17     }
18     public Obstacle(Point p) {
19       this.diam = DIAM;
20       this.charge = CHARGE;
21       this.mass = MASS;
22       this.p = p;
23     }
24     public double distanceSq(Robot r) {
25         double d = distance(r);
26         return d * d;
27     }
28     public double distance(Robot r) {
29         double d = p.distance(r.x, r.y) - (diam + r.diam) / 2;
30         return d > 0?  d: 0.0000001;
31     }
32 }




001 package virtualforcerobot;
003 import java.awt.*;
004 import java.util.*;
006 public class Robot {
007     double x, y;
008     double vx, vy;
009     double dt;
010     double m;
011     double fMax;
012     ArrayList obstacles;
013     public double diam;
014     Obstacle target;
015     boolean virtualforce = false;
016     public Robot(Point p, ArrayList obstacles, double dt, double m, double fMax, double diam) {
017         this.diam = diam;
018         this.fMax = fMax;
019         this.m = m;
020         this.dt = dt;
021         vx = vy = 0;
022         this.x = p.x;
023         this.y = p.y;
024         this.obstacles = obstacles;
025         target = (Obstacle) obstacles.get(0);
026     }
028     public void updatePosition() {
029         double dirX = 0, dirY = 0;
030         double minS = 200;
031         Iterator iter = obstacles.iterator();
033         while (iter.hasNext()) {
034             Obstacle ob = (Obstacle);
035             double distSq =ob.distanceSq(this);
036             if (distSq < 1)
037                 Math.sin(1);
038             double dx = ob.charge * (ob.p.x - x) / distSq;
039             double dy = ob.charge * (ob.p.y - y) / distSq;
040             dirX += dx;
041             dirY += dy;
042         }
044         double norm = Math.sqrt(dirX*dirX+dirY*dirY);
045         dirX = dirX / norm;
046         dirY = dirY / norm;
048         iter = obstacles.iterator();
049         while (iter.hasNext()) {
050             Obstacle ob = (Obstacle);
051             if(!range(ob, 1200)) continue;
052             double distSq =ob.distanceSq(this);
053             double dx = (ob.p.x - x);
054             double dy = (ob.p.y - y);
055             //add normal noise to simulate the sonar effect
056             dx = addNoise(dx, 0, 1);
057             dy = addNoise(dy, 0 ,1);
058             double safety = distSq / ((dx * dirX+dy*dirY));
059             if ((safety > 0) &&(safety < minS))
060                 minS = safety;
061         }
062         if (minS < 5) {
063             double oc = target.charge;
064             target.charge*=minS/5;
065             System.out.println(oc +" DOWN TO "+ target.charge);
066         }
068         if (minS > 100) {
069             double oc = target.charge;
070             target.charge*=minS/100;
071             System.out.println(oc +" UP TO "+ target.charge);
072         }
075         double vtNorm = minS/2;
076         double vtx = vtNorm * dirX;
077         double vty = vtNorm * dirY;
078         double fx = m * (vtx - vx) / dt;
079         double fy = m * (vty - vy) / dt;
080         double fNorm =  Math.sqrt(fx * fx + fy * fy);
081         if (fNorm > fMax ) {
082             fx *=  fMax / fNorm;
083             fy *=  fMax / fNorm;
084         }
085         vx += (fx * dt) / m;
086         vy += (fy * dt) / m;
087         //virtual force component        
088         if(virtualforce && (target.charge < 1000) && (x > 25) && (y > 25)) {
089           System.out.println("Virtual Force");
090           target.charge*=minS/100;
092             vx = vx + 5;
093         }
094         x += vx * dt;
095         y += vy * dt;
096     }
098     boolean range(Obstacle ob, double range) {
099        double dist =ob.distanceSq(this);
100        if(dist < range)
101          return true;
102        else 
103          return false;
104     }
106     double addNoise(double x, double mean, double stddev) {
107       Random r = new Random();
108       double noise = stddev*r.nextGaussian() + mean;
109       return x + noise;
110     }
111 }




001
002 package virtualforcerobot;
004 import javax.swing.*;
005 import java.awt.*;
006 import java.awt.event.*;
007 import java.util.*;
010 public class FrameMain extends JFrame implements ActionListener {
011     JPanel contentPane;
012     BorderLayout borderLayout1 = new BorderLayout();
013     JCheckBox virtualforce;
014     JPanel commands = new JPanel();
015     Set<Point> s = new HashSet<Point> ();
016     JPanel pnlDraw = new JPanel() {
018         public void paint(Graphics g) {          
019             super.paint(g);
020             g.setColor(Color.BLACK);
021             s.add(new Point((int) r.x, (int) r.y));
022             Iterator iter = obstacles.iterator();
023             while (iter.hasNext()) {
024                 Obstacle ob = (Obstacle);
025                 g.fillArc((int)(ob.p.x - ob.diam/2), (int)(ob.p.y - ob.diam/2), (int) ob.diam, (int) ob.diam, 0, 360);
026             }
027             g.setColor(Color.BLUE);
028             for(Point p : s) {
029               g.fillArc((int)(p.x - r.diam/4), (int)(p.y - r.diam/4), (int)r.diam/2, (int)r.diam/2, 0, 360);
030             }
032             g.setColor(Color.RED);
033             g.fillArc((int)(r.x - r.diam/2), (int)(r.y - r.diam/2), (int)r.diam, (int)r.diam, 0, 360);
034             g.drawLine((int)r.x, (int)r.y, (int)(r.x+r.vx), (int)(r.y+r.vy));
035         }
036     };
038     ArrayList obstacles = new ArrayList();
040     {
041       obstacles.add(new Obstacle(new Point(0, 0), +100, 3));
042     }
044     Robot r = new Robot(new Point(800, 600), obstacles,  0.1, 40, 4000, 15);
045     Thread t = new Thread();
046     public FrameMain() {
047         try {
048             setDefaultCloseOperation(EXIT_ON_CLOSE);
049             jbInit();
050         } catch (Exception exception) {
051             exception.printStackTrace();
052         }
054     }
056     /**
057      * Component initialization.
058      *
059      * @throws java.lang.Exception
060      */
061     private void jbInit() throws Exception {
062         contentPane = (JPanel) getContentPane();
063         contentPane.setLayout(borderLayout1);
064         setSize(new Dimension(900, 700));
065         setTitle("Fuzzy Roboter");
066         pnlDraw.addMouseListener(new FrameMain_pnlDraw_mouseAdapter(this));
067         JButton bnew = new JButton("New");
068         bnew.setActionCommand("new");
069         bnew.addActionListener(this);
070         commands.add(bnew);
071         JButton bstart = new JButton("Start");
072         bstart.setActionCommand("start");
073         bstart.addActionListener(this);
074         commands.add(bstart);
075         JButton bpause = new JButton("Pause");
076         bpause.setActionCommand("pause");
077         bpause.addActionListener(this);        
078         commands.add(bpause);
079         JButton bresume = new JButton("Resume");
080         bresume.setActionCommand("resume");
081         bresume.addActionListener(this);        
082         commands.add(bresume);        
083         virtualforce = new JCheckBox("Virtual Force");
084         virtualforce.setActionCommand("vf");
085         virtualforce.addActionListener(this);
086         commands.add(virtualforce);
087         contentPane.add(pnlDraw, java.awt.BorderLayout.CENTER);
088         contentPane.add(commands, java.awt.BorderLayout.NORTH);
089     }
091     public void pnlDraw_mouseReleased(MouseEvent e) {
092       obstacles.add(new Obstacle(e.getPoint()));
093         pnlDraw.repaint();
094     }
096   public void actionPerformed(ActionEvent e) {
097     if (e.getActionCommand().equals("start")) {      
098           t = new Thread(new Runnable() {
099               public void run() {
100                   while ((r.x > 1) && (r.y > 1)) {
101                       r.updatePosition();
102                       pnlDraw.repaint();
103                       try {
104                           Thread.sleep(100);
105                       catch (InterruptedException ex) {
106                           System.out.println(ex);
107                       }
108                   }
109               }
110           });  
111           t.start();
112     }
113     else if (e.getActionCommand().equals("new")){
114       r.x = 0; r.y = 0;
115           r = new Robot(new Point(800, 600), obstacles,  0.1, 40, 4000, 15);
116           s = new HashSet<Point> ();
117         t = new Thread();  
118         r.virtualforce = virtualforce.isSelected();
119         pnlDraw.repaint();        
120     } else if (e.getActionCommand().equals("pause")){
121       t.suspend();
122     } else if (e.getActionCommand().equals("resume")){
123       t.resume();
124     }
125     else if (e.getActionCommand().equals("vf")) {
126       System.out.println("VIRTUAL FORCE!!");
127       r.virtualforce = !r.virtualforce;
128     }
129   }
131 }
134 class FrameMain_pnlDraw_mouseAdapter extends MouseAdapter {
135     private FrameMain adaptee;
136     FrameMain_pnlDraw_mouseAdapter(FrameMain adaptee) {
137         this.adaptee = adaptee;
138     }
140     public void mouseReleased(MouseEvent e) {
141         adaptee.pnlDraw_mouseReleased(e);
142     }
143 }


01 package virtualforcerobot;
03 import java.awt.Toolkit;
04 import javax.swing.SwingUtilities;
05 import javax.swing.UIManager;
06 import java.awt.Dimension;
08 public class AppMain {
09     boolean packFrame = false;
11     /**
12      * Construct and show the application.
13      */
14     public AppMain() {
15         FrameMain frame = new FrameMain();
16         // Validate frames that have preset sizes
17         // Pack frames that have useful preferred size info, e.g. from their layout
18         if (packFrame) {
19             frame.pack();
20         } else {
21             frame.validate();
22         }
24         // Center the window
25         Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
26         Dimension frameSize = frame.getSize();
27         if (frameSize.height > screenSize.height) {
28             frameSize.height = screenSize.height;
29         }
30         if (frameSize.width > screenSize.width) {
31             frameSize.width = screenSize.width;
32         }
33         frame.setLocation((screenSize.width - frameSize.width) / 2,
34                           (screenSize.height - frameSize.height) / 2);
35         frame.setVisible(true);
36     }
38     /**
39      * Application entry point.
40      *
41      * @param args String[]
42      */
43     public static void main(String[] args) {
44         SwingUtilities.invokeLater(new Runnable() {
45             public void run() {
46                 try {
47                     UIManager.setLookAndFeel(UIManager.
48                                              getSystemLookAndFeelClassName());
049                 } catch (Exception exception) {
50                     exception.printStackTrace();
51                 }
53                 new AppMain();
54             }
55         });
56     }
57 }




