The Traveling Salesman Problem
Due Tues. March 2 at 09:15 AM

The Traveling Salesman Problem (TSP) is a problem in combinatorial optimization studied in operations research and theoretical computer science. Given a list of cities and their pairwise distances, the task is to find a shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization; it is widely used as a benchmark for optimization methods. Even though the problem is computationally difficult, a large number of heuristics and exact methods are known, so that some instances with tens of thousands of cities can be solved.

The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as genome sequencing. In these applications, the concept "city" represents, for example, customers, soldering points, or DNA fragments, and the concept "distance" represents travel times, costs, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

In the theory of computational complexity, the decision version of the TSP belongs to the class of NP-complete problems. It is therefore believed that there is no efficient algorithm for solving TSP instances: the worst-case running time of any known exact algorithm grows exponentially with the number of cities, so even some instances with only hundreds of cities would take many CPU-years to solve exactly.

In this project, you will consider three problems involving 6, 60, and 260 imaginary cities that are randomly distributed across an imaginary country of size 1000 x 1000 km. Your mission for each problem is to identify the shortest possible tour that visits each city exactly once.
For each problem you are to turn in (A) a list of cities (numbered 0-5, 0-59, and 0-259, respectively) corresponding to the best tour that you found; (B) the total distance of that tour; and (C) a copy of your computer code (.cpp file) via Canvas. Item A should also be submitted in electronic form (as a .txt or .xls file) via Canvas. A .txt and .xls file with the input data for each problem, and an example of a .txt file showing a possible answer to the 6-city problem, have been emailed to you. Ten points (10%) of extra credit will be added to your grade if you identify the best answer to the 60- or 260-city problem among all students in the class.

Simulated annealing (SA) is a generic probabilistic meta-heuristic for attacking global optimization problems in operations research and applied mathematics, namely, locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete (e.g., all tours that visit a given set of cities), when it is not too difficult to identify a feasible solution to the problem, and when the objective function is nonlinear. For certain problems, simulated annealing may be more effective than both exhaustive enumeration and mathematical programming, provided that the goal is merely to find an acceptably good solution in a fixed amount of time, rather than the best possible solution.

The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding "more optimal" configurations with lower internal energy than the initial one.
By analogy with this physical process, each step of the SA algorithm uses the current solution to randomly generate a "neighboring solution" or "neighbor." The neighbor replaces the current solution with a probability that depends on the difference between the corresponding objective values and on a global parameter T (called the temperature), which is gradually decreased during the process. The dependency is such that the process accepts almost any neighboring solution when T is large, but becomes increasingly selective as T approaches zero. The allowance for "uphill" moves from better solutions to worse solutions saves the method from becoming stuck at local minima, which are the bane of greedier methods. The method was independently described by S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi in 1983, and by V. Černý in 1985. It is an adaptation of the Metropolis-Hastings algorithm, a Monte Carlo method for generating sample states of a thermodynamic system, invented by N. Metropolis et al. in 1953.

SA procedure (for minimization):

1. Select an initial temperature T (a real number > 0) and a cooling rate r (0 < r < 1).
2. Generate an initial feasible solution S for the problem (using a construction heuristic).
3. While the termination criterion is not yet met:
   a. Pick a random neighbor S' of S.
   b. Let Δ = f(S') − f(S), the change in objective value when going from S to S'.
   c. If Δ ≤ 0, set S = S'.
   d. If Δ > 0, set S = S' with probability e^(−Δ/T).
   e. Set T = rT (reduce the temperature).
4. Display the best solution found.

Note that the larger Δ is, the smaller the chance of moving to a neighbor that is worse by Δ. Likewise, as the temperature T decreases, the chance of moving to a worse neighbor decreases. For example, with Δ = 10 the acceptance probability is e^(−10/100) ≈ 0.90 at T = 100, but only e^(−10/1) ≈ 0.00005 at T = 1.

Issues that need to be addressed:
• What should the initial temperature T and cooling rate r be?
• What construction heuristic is used to create the initial feasible solution?
• What heuristic is used to (randomly) generate feasible neighbors from a given solution?

Good luck!
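The SA procedure above can be sketched in C++ (the assignment's language). This is only an illustrative skeleton, not the required implementation: the starting temperature, cooling rate, iteration count, random seed, and the simple swap-two-cities neighbor move are all assumptions you will want to tune or replace with your own choices.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Simulated annealing sketch for TSP. d is a symmetric distance matrix;
// tour is an initial feasible solution (a permutation of city indices).
// T0 = 1000, r = 0.999, and 100000 iterations are illustrative values only.
std::vector<int> anneal(const std::vector<std::vector<double>>& d,
                        std::vector<int> tour,
                        double T = 1000.0, double r = 0.999,
                        int iters = 100000) {
    // Objective f(S): total length of the closed tour.
    auto length = [&](const std::vector<int>& t) {
        double total = 0.0;
        for (std::size_t i = 0; i < t.size(); ++i)
            total += d[t[i]][t[(i + 1) % t.size()]];
        return total;
    };
    std::mt19937 rng(12345);  // fixed seed for reproducibility
    std::uniform_int_distribution<int> pick(0, static_cast<int>(tour.size()) - 1);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    std::vector<int> best = tour;
    double curLen = length(tour), bestLen = curLen;
    for (int k = 0; k < iters; ++k) {
        int i = pick(rng), j = pick(rng);
        std::swap(tour[i], tour[j]);               // neighbor: swap two cities
        double delta = length(tour) - curLen;      // Δ = f(S') - f(S)
        if (delta <= 0 || unit(rng) < std::exp(-delta / T)) {
            curLen += delta;                       // accept S'
            if (curLen < bestLen) { bestLen = curLen; best = tour; }
        } else {
            std::swap(tour[i], tour[j]);           // reject: undo the swap
        }
        T *= r;                                    // cool down: T = rT
    }
    return best;  // best solution found, per step 4
}
```

Note the design choice of tracking `best` separately from the current solution: SA may wander away from a good tour at high temperature, so the procedure reports the best tour seen over the whole run rather than the final one.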