Partial List of Abstracts

J28
Y. N. Lien, Y. W. Ma, and B. W. Wah, ``Transformation of the Generalized Traveling-Salesman Problem into the Standard Traveling-Salesman Problem,'' Information Sciences, Elsevier Science Pub. Co., Inc., New York, NY, vol. 74, nos. 1 & 2, Oct. 15, 1993, pp. 177-189.

In the generalized traveling-salesman problem (GTSP), we are given a set of cities that are grouped into possibly intersecting clusters. The objective is to find a closed path of minimum cost that visits at least one city in each cluster. Given an instance G of the GTSP, we first transform G into another instance G' of the GTSP in which all the clusters are nonintersecting, and then transform G' into an instance G" of the standard traveling-salesman problem (TSP). We show that any feasible solution of the TSP instance G" can be transformed into a feasible solution of the GTSP instance G of no greater cost, and that any optimal solution of the TSP instance G" can be transformed into an optimal solution of the GTSP instance G.

J38
K. M. Baumgartner and B. W. Wah, ``Computer Scheduling Algorithms: Past, Present, and Future,'' Information Sciences, Elsevier Science Pub. Co., vols. 57 & 58, Sept.-Dec. 1991, pp. 319-345.

Efficient techniques for scheduling computing resources are essential for achieving satisfactory performance for users as computer systems and their applications become more complex. In this paper, we survey research on scheduling algorithms, review previous classifications of scheduling problems, and present a broader classification scheme. Using a uniform terminology for scheduling strategies and the new classification scheme, previous work on scheduling strategies is reviewed, and trends in scheduling research are identified. Finally, a methodology for developing scheduling strategies is presented.

J41
A. Ieumwananonthachai, A. N. Aizawa, S. R. Schwartz, B. W. Wah, and J. C. Yan, ``Intelligent Process Mapping through Systematic Improvement of Heuristics,'' J. of Parallel and Distributed Computing, Academic Press, vol. 15, June 1992, pp. 118-142.

In this paper, we present the design of a system for automatically learning and evaluating new heuristic methods that can be used to map a set of communicating processes on a network of computers. Our learning system is based on testing a population of competing heuristic methods within a fixed time constraint. We develop and analyze various resource scheduling strategies based on a statistical model that trades off the number of new heuristic methods considered against the amount of testing performed on each. We implement a prototype learning system (Teacher 4.1) for learning new heuristic methods used in Post-Game Analysis, a system that iteratively generates and refines mappings of a set of communicating processes on a network of computers. Our performance results show that a significant improvement can be obtained by a systematic exploration of the space of possible heuristic methods.

J42
B. W. Wah, A. Ieumwananonthachai, L.-C. Chu, and A. N. Aizawa, ``Genetics-Based Learning of New Heuristics: Rational Scheduling of Experiments and Generalization,'' IEEE Trans. on Knowledge and Data Engineering, vol. 7, no. 5 (Oct. 1995), pp. 763-785.

In this paper we present new methods for the automated learning of heuristics in knowledge-lean applications and for finding heuristics that can be generalized to unlearned domains. These applications lack domain knowledge for credit assignment; hence, operators for composing new heuristics are generally model-free, domain independent, and syntactic in nature. The operators we have used are genetics-based; examples include mutation and cross-over. Learning is based on a generate-and-test paradigm that maintains a pool of competing heuristics, tests them to a limited extent, creates new ones from those that have performed well in the past, and prunes poor ones from the pool. We have studied three important issues in learning better heuristics: (a) anomalies in performance evaluation, (b) rational scheduling of limited computational resources in testing candidate heuristics in single-objective as well as multi-objective learning, and (c) finding heuristics that can be generalized to unlearned domains. We show experimental results in learning better heuristics for (a) process placement for distributed-memory multicomputers, (b) node decomposition in a branch-and-bound search, (c) generation of test patterns in VLSI circuit testing, and (d) VLSI cell placement and routing.
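
The following is a minimal sketch of the generate-and-test loop described above, assuming heuristics are encoded as tuples of numeric parameters; the operators, pool size, and testing budget are illustrative assumptions and not the authors' implementation.

```python
import random

def crossover(a, b):
    """Single-point crossover of two parameter tuples (illustrative)."""
    k = random.randrange(1, len(a))
    return a[:k] + b[k:]

def mutate(h, sigma=0.1):
    """Perturb one randomly chosen parameter of h (illustrative)."""
    h = list(h)
    i = random.randrange(len(h))
    h[i] += random.gauss(0.0, sigma)
    return tuple(h)

def learn_heuristics(evaluate, init_pool, budget, tests_per_round=5):
    """Generate-and-test learning over a pool of competing heuristics.

    Each heuristic is a tuple of numeric parameters; evaluate(h) returns one
    noisy performance score (higher is better); budget caps the total tests."""
    pool = {h: [] for h in init_pool}      # heuristic -> scores observed so far
    size, used = len(init_pool), 0
    mean = lambda h: sum(pool[h]) / len(pool[h]) if pool[h] else float("-inf")
    while used < budget:
        for h in list(pool):               # limited testing of every pool member
            for _ in range(tests_per_round):
                pool[h].append(evaluate(h))
                used += 1
        survivors = sorted(pool, key=mean, reverse=True)[: size // 2]
        new_pool = {h: pool[h] for h in survivors}
        while len(new_pool) < size:        # refill the pool with offspring
            a, b = random.sample(survivors, 2)
            new_pool.setdefault(mutate(crossover(a, b)), [])
        pool = new_pool
    return max(pool, key=mean)
```

In the actual system, the evaluation step would run a candidate heuristic on application test cases such as the process-placement or circuit-testing problems listed above.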

J43
B. W. Wah and L.-C. Chu, ``Combinatorial Search Algorithms with Meta-Control: Modeling and Implementations,'' Int'l J. of Artificial Intelligence Tools, World Scientific Publishers, vol. 1, no. 3 (Sept. 1992), pp. 369-397.

In this paper, we model search algorithms with meta-control, allowing resource constraints, approximation, and parallel processing to be incorporated easily in the search process. The basic building block of the model is a hierarchical search process (HSP) consisting of context-free and context-sensitive grammars classified according to problem-independent and problem-dependent parts. The context-sensitive components are used mainly for evaluating decision parameters and in ordering production rules in the context-free grammar. The execution of the grammars for given initial conditions may invoke other HSPs already defined in the system. We describe ISE (acronym for Integrated Search Environment), a tool that implements hierarchical searches with meta-control. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed easily by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. We describe the organization of ISE and present experiments carried out on the system.

J44
B. W. Wah, ``Population-Based Learning: A New Method for Learning from Examples under Resource Constraints,'' IEEE Trans. on Knowledge and Data Engineering, vol. 4, no. 5 (Oct. 1992), pp. 454-474.

In this paper, we study a new learning model for designing heuristics automatically under resource constraints. We focus on improving performance-related heuristic methods (HMs) in knowledge-lean application domains. We assume that learning is episodic, that the performance measures of an episode are dependent only on the final state reached in evaluating the corresponding test case and are independent of the intermediate decisions and internal states, and that the aggregate performance measures of the HMs involved are independent of the order of evaluation of test cases. The learning model is based on testing a population of competing HMs for an application problem and switches from one to another dynamically, depending on the outcome of previous tests. Its goal is to find a good HM within the resource constraints, with proper tradeoff between cost and quality. It complements existing point-based machine learning models that maintain only one incumbent HM and that test the HM extensively before switching to alternative ones. It extends existing work on classifier systems by addressing issues related to delays in feedback, scheduling of tests of HMs under limited resources, anomalies in performance evaluation, and scalability of HMs. Finally, we describe our experience in applying the learning method on five application problems.

J46
B. W. Wah, T. S. Huang, A. K. Joshi, D. Moldovan, J. Aloimonos, R. K. Bajcsy, D. Ballard, D. DeGroot, K. DeJong, C. R. Dyer, S. E. Fahlman, R. Grishman, L. Hirschman, R. E. Korf, S. E. Levinson, D. P. Miranker, N. H. Morgan, S. Nirenburg, T. Poggio, E. M. Riseman, C. Stanfill, S. J. Stolfo, and S. L. Tanimoto, ``Report on Workshop on High Performance Computing and Communications for Grand Challenge Applications: Computer Vision, Speech and Natural Language Processing, and Artificial Intelligence,'' IEEE Transactions on Knowledge and Data Engineering, vol. 5, no. 1, Feb. 1993, pp. 138-154; also in Tech. Rep. CRHC-92-26, Center for Reliable and High Performance Computing, Coordinated Science Laboratory, Univ. of Illinois, Urbana, IL.

This article reports the findings of the Workshop on High Performance Computing and Communications (HPCC) for Grand Challenge Applications: Computer Vision, Speech and Natural Language Processing (SNLP), and Artificial Intelligence (AI). The goals of the workshop were to identify applications, research problems, and designs of HPCC systems for supporting applications in these areas.

In computer vision, we have identified the main scientific issues as machine learning, surface reconstruction, inverse optics and integration, model acquisition, and perception and action. Since vision algorithms operate at different levels of granularity, computers for supporting these algorithms need to be heterogeneous and modular. Advances in technology, new architectural concepts, and software design methods are essential for this area.

In SNLP, we have identified issues in statistical analysis in corpus-based speech and language understanding, search strategies for language analysis, auditory and vocal-tract modeling, integration of multiple levels of speech and language analyses, and connectionist systems. Similar to algorithms in computer vision, algorithms in SNLP require high computational power, ranging from general purpose supercomputing to special purpose VLSI systems. As processing has various requirements, a hybrid heterogeneous computer system is the most desirable.

In AI, important issues that need immediate attention include the development of efficient machine learning and heuristic search methods that can adapt to different architectural configurations, and the design and construction of scalable and verifiable knowledge bases, active memories, and artificial neural networks. A computer system for supporting AI applications is heterogeneous, requiring research in high speed computer networks, mass storage and efficient retrieval methods, computer languages, and hardware and software design tools.

Research in these areas is inherently multidisciplinary and will require active participation of researchers in device and networking technologies, signal processing, computer architecture, software engineering, and knowledge engineering. Besides extending current frontiers in research, an important aspect to be emphasized is the integration of existing components and results into working systems.

J47
Akiko N. Aizawa and Benjamin W. Wah, ``A Sequential Sampling Procedure for Genetic Algorithms,'' Computers and Mathematics with Applications, Pergamon Press, Ltd., Tarrytown, NY, vol. 27, no. 9/10, 1994, pp. 77-82. Also presented at the 5th Int'l Workshop of the Bellman Continuum, Waikoloa, Hawaii, Jan. 1993.

In this paper, we develop new methods for adjusting configuration parameters of genetic algorithms operating in a noisy environment. Such methods are related to the scheduling of resources for tests performed in genetic algorithms. Assuming that the population size is given, we address two problems related to the design of efficient scheduling algorithms specifically important in noisy environments. First, we study the duration-scheduling problem that is related to setting dynamically the duration of each generation. Second, we study the sample-allocation problem that entails the adaptive determination of the number of evaluations taken from each candidate in a generation. In our approach, we model the search process as a statistical selection process and derive equations useful for these problems. Our results show that our adaptive procedures improve the performance of genetic algorithms over that of commonly used static ones.
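
As a rough illustration of the sample-allocation idea, the sketch below gives extra fitness evaluations only to candidates whose noisy estimates are still statistically indistinguishable from the current best; the confidence-interval rule and budget are assumptions, not the paper's derived equations.

```python
import statistics

def allocate_samples(candidates, evaluate, init_samples=3, extra_budget=50):
    """Spend extra noisy evaluations only on candidates that might still be best."""
    scores = {c: [evaluate(c) for _ in range(init_samples)] for c in candidates}
    for _ in range(extra_budget):
        means = {c: statistics.mean(s) for c, s in scores.items()}
        stderr = {c: statistics.stdev(s) / len(s) ** 0.5 for c, s in scores.items()}
        best = max(means, key=means.get)
        # Keep sampling any candidate whose upper confidence bound still reaches
        # the best candidate's lower bound; prefer the least-sampled one.
        contenders = [c for c in candidates
                      if means[c] + 2 * stderr[c] >= means[best] - 2 * stderr[best]]
        target = min(contenders, key=lambda c: len(scores[c]))
        scores[target].append(evaluate(target))
    return scores
```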

J48
Kumar Ganapathy and Benjamin Wah, ``Optimal Synthesis of Processor Arrays with Pipelined Arithmetic Units from Uniform Recurrence Equations,'' Parallel Processing Letters, World Scientific Pub. Co., 1994, vol. 4, no. 3, pp. 339-350.

Two-level pipelining in processor arrays (PAs) involves pipelining of operations across processing elements (PEs) and pipelining of operations in functional units in each PE. Although it is an attractive method for improving the throughput of PAs, existing methods for generating PAs with two-level pipelining are restricted and cannot systematically explore the entire space of feasible designs. In this paper, we extend a systematic design method called the General Parameter Method (GPM), which we developed earlier, to find optimal designs of PAs with two-level pipelines. The basic idea is to add new constraints on periods of data flows to include the effect of internal functional pipelines in the PEs. As an illustration, we present pipelined PA designs for computing matrix products. For n-dimensional meshes and other symmetric problems, we provide an efficient scheme to obtain a pipelined PA from a non-pipelined PA using a reindexing transformation. This scheme is used in GPM as a pruning condition to arrive at optimal pipelined PAs efficiently. For pipelines with minimum initiation interval (MII) greater than unity, we show additional constraints that ensure correctness of the synthesized PAs.

J49
Akiko Aizawa and Benjamin Wah, ``Scheduling of Genetic Algorithms in a Noisy Environment,'' Evolutionary Computation, Academic Press, vol. 2, no. 2, 1994, pp. 97-122.

In this paper, we have studied the scheduling of computational resources for genetic algorithms operating in a noisy environment. Assuming that the number of candidates in each generation is fixed, we have studied two related scheduling problems: determining a suitable duration for a generation, and finding the appropriate number of samples to be drawn from each candidate. Our results show that dynamic scheduling policies perform better than existing static ones. Although we have used simple test functions in our experiments, the advantage of using dynamic scheduling is universal and is independent of the reproduction mechanism of individual genetic algorithms.

J50
Kumar N. Ganapathy and Benjamin W. Wah, ``Optimal Synthesis of Algorithm-Specific Lower-Dimensional Processor Arrays,'' IEEE Transactions on Parallel and Distributed Systems, vol. 7, no. 3 (March 1996), pp. 274-287.

Processor arrays are frequently used to deliver high performance in many applications with computationally intensive operations. This paper presents the General Parameter Method (GPM), a systematic parameter-based approach for synthesizing such algorithm-specific architectures. GPM can synthesize processor arrays of any lower dimension from a uniform-recurrence description of the algorithm. The design objective is a general non-linear and non-monotonic user-specified function, and depends on attributes such as computation time of the recurrence on the processor array, completion time, load time, and drain time. In addition, bounds on some or all of these attributes can be specified. GPM performs an efficient search of polynomial complexity to find the optimal design satisfying the user-specified design constraints. As an illustration, we show how GPM can be used to find optimal linear processor arrays for computing transitive closures. We consider design objectives that minimize computation time, or processor count, or completion time (including load and drain times), and user-specified constraints on number of processing elements and/or computation/completion times. We show that GPM can be used to obtain optimal designs that trade between number of processing elements and completion time, thereby allowing the designer to choose a design that best meets the specified design objectives. We also show the equivalence between the model assumed in GPM and that in the popular dependence-based methods. Consequently, GPM can be used to find optimal designs for both models.

J51
M. Aboelaze and B. W. Wah, ``Processor Array with Bounded I/O Ports for Computing Transitive Closures,'' Journal of Parallel and Distributed Computing, Academic Press, vol. 29, no. 1 (August 1995), pp. 84-90.

In this paper, we present the design of a processor array (PA) with a bounded number of input/output (I/O) ports (L-by-N PEs with O(L) I/O ports, where 1 ≤ L ≤ N) for computing transitive closures. Previous designs of PAs to solve this problem use either N-by-N PAs with O(N) I/O ports or linear PAs with O(1) I/O ports. These designs are restricted by their I/O bandwidth because designs using O(N) I/O ports are limited by the number of PEs that can be supported by these ports, whereas designs with O(1) ports are limited by the I/O time to load and drain data. In contrast, our proposed design uses O(L) I/O ports, where L is a technology-dependent design parameter. In our proposed architecture, we have used more powerful processing elements (PEs), each with a microprogram-driven ALU, a limited number of I/O buffers and O(N/L) memory words. Our technique for mapping transitive-closure computations consists of four steps. (1) Starting from a mapping of a transitive-closure problem on an N-by-N PA, we map the computations into an L-by-N PA. (2) We then schedule the computations in the L-by-N PA in order to satisfy the dependencies. (3) Based on the new mapping, we derive speeds and directions of data flows and the number of buffers required in each PE. (4) Finally, we derive the microprogram in each PE for correct execution. We show that our design specializes to efficient designs in the boundary cases when L=1 (linear PA) and L=N (square mesh).

J52
Benjamin W. Wah and Yi Shang, ``A Comparative Study of IDA*-Style Searches,'' Int'l Journal of Tools with Artificial Intelligence, World Scientific, vol. 3, no. 4 (Oct. 1995), pp. 493-523.

In this paper, we study the performance of various IDA*-style searches and investigate methods to improve their performance by predicting in each stage the threshold to be used for pruning. Without loss of generality, we consider minimization problems in this paper. We first present three models to approximate the distribution of the number of search nodes by lower bounds: exponential, geometric, and linear, and illustrate these distributions based on some well-known combinatorial search problems. Based on these distributions, we show the performance of an ideal IDA* algorithm and identify reasons why existing IDA*-style algorithms perform well. In practice, we will be able to know from experience the type of distribution for a given problem instance, but will not be able to know the parameters of this distribution until the instance is solved. Hence, we develop RIDA*, a method that estimates dynamically the parameters of the distribution, and predicts the best threshold to be used in each stage. Finally, we compare the performance of several IDA*-style algorithms --- Korf's IDA* and RBFS, RIDA*, IDA*_CR and DFS* --- on several application problems, and identify conditions under which each of these algorithms will perform well.
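
For reference, a standard IDA* skeleton for a minimization search is sketched below; plain IDA* raises the threshold to the smallest f-value that exceeded it in the previous stage, whereas RIDA*, as described above, would instead predict the next threshold from an estimated distribution of node counts. The function names are illustrative.

```python
import math

def ida_star(start, h, successors, is_goal):
    """successors(n) yields (child, edge_cost); h(n) is an admissible lower bound."""
    threshold = h(start)

    def dfs(node, g, threshold, path):
        f = g + h(node)
        if f > threshold:
            return None, f                    # report the smallest overflow f
        if is_goal(node):
            return path, f
        next_threshold = math.inf
        for child, cost in successors(node):
            if child in path:                 # simple cycle check
                continue
            found, t = dfs(child, g + cost, threshold, path + [child])
            if found is not None:
                return found, t
            next_threshold = min(next_threshold, t)
        return None, next_threshold

    while True:
        solution, t = dfs(start, 0, threshold, [start])
        if solution is not None:
            return solution
        if t == math.inf:
            return None                       # search space exhausted
        threshold = t                         # next stage's pruning threshold
```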

J53
Pankaj Mehra and Benjamin W. Wah, ``Synthetic Workload Generation for Load-balancing Experiments,'' IEEE Parallel and Distributed Technology, vol. 3, no. 3 (Fall 1995), pp. 4-19.

This paper describes Dynamic Workload Generator (DWG), a facility for generating realistic and reproducible synthetic workloads for use in load-balancing experiments. For such experiments, the generated workload must not only mimic the highly dynamic resource-utilization patterns found on today's distributed systems but also behave as a real workload does when test jobs are run concurrently with it. The latter requirement is important in testing alternative load-balancing strategies, a process that requires running the same job multiple times, each time at a different site but under an identical network-wide workload. Parts of DWG are implemented inside the operating-system kernel and have complete control over the utilization levels of four key resources: CPU, memory, disk, and network. Besides accurately replaying network-wide load patterns recorded earlier, DWG gives up a fraction of its resources each time a new job arrives and reclaims these resources upon job completion. The latter operation is controlled by pattern-doctoring rules implemented in DWG. We present DWG's architecture, its doctoring rules, systematic methods for adjusting and evaluating doctoring rules, and experimental results on a network of Sun workstations.

J54
Benjamin W. Wah and Yao-Jen Chang, ``Trace-Based Methods for Solving Nonlinear Global Optimization and Satisfiability Problems,'' J. of Global Optimization, vol. 10, no. 2 (March 1997), pp. 107-141.

In this paper we present a method called Novel (Nonlinear Optimization via External Lead) for solving continuous and discrete global optimization problems. Novel addresses the balance between global search and local search, using a trace to aid in identifying promising regions before committing to local searches. We discuss Novel for solving continuous constrained optimization problems and show how it can be extended to solve constraint-satisfaction and discrete satisfiability problems. We first transform the problem using Lagrange multipliers into an unconstrained version. Since a stable solution in a Lagrangian formulation only guarantees a local optimum satisfying the constraints, we propose a global search phase in which an aperiodic and bounded trace function is added to the search to first identify promising regions for local search. The trace generates an information-bearing trajectory from which good starting points are identified for further local searches. Taking only a small portion of the total search time, this elegant approach significantly reduces unnecessary local searches in regions leading to the same local optimum. We demonstrate the effectiveness of Novel on a collection of continuous optimization benchmark problems, finding the same or better solutions while satisfying the constraints. We extend Novel to discrete constraint satisfaction problems (CSPs) by showing an efficient transformation method for CSPs and the associated representation in finite-difference equations in Novel. We apply Novel to solve Boolean satisfiability instances in circuit fault detection and circuit synthesis applications, and show performance comparable to that of the best existing method.
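
A highly simplified sketch of the kind of search dynamics described above is given below: gradient descent in the variables, gradient ascent in the Lagrange multipliers, and an external bounded trace term that pulls the trajectory through the space. The trace function, step sizes, and stopping rule are illustrative assumptions only.

```python
import numpy as np

def lagrangian_trace_search(f, g, grad_f, grad_g, x0, steps=10000,
                            eta_x=1e-3, eta_lam=1e-3, eta_trace=1e-3):
    """Minimize f(x) subject to g(x) = 0 (vector of equality constraints).

    grad_f(x) returns the gradient of f; grad_g(x) returns the Jacobian of g."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(g(x)))
    best_x, best_f = None, np.inf
    for k in range(steps):
        # Bounded, slowly varying lead that drags the trajectory around the space.
        trace = np.sin(1e-3 * k + np.arange(len(x)))
        grad_L_x = grad_f(x) + grad_g(x).T @ lam
        x = x - eta_x * grad_L_x + eta_trace * (trace - x)   # descend in x, follow trace
        lam = lam + eta_lam * np.asarray(g(x))               # ascend in the multipliers
        if np.all(np.abs(g(x)) < 1e-6) and f(x) < best_f:
            best_x, best_f = x.copy(), f(x)                  # record promising point
    return best_x, best_f
```

Points recorded this way would serve as starting points for conventional local searches, which is the role the trace plays in the method described above.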

J55
Yi Shang and Benjamin W. Wah, ``Global Optimization for Neural Network Training,'' IEEE Computer Magazine, vol. 29, no. 3 (March 1996), pp. 45-54.

In this paper, we have studied various supervised learning methods for training feed-forward neural networks. In general, such learning can be considered as a nonlinear global optimization problem in which the goal is to minimize a nonlinear error function that spans the space of weights, using heuristic strategies that look for global optima (in contrast to local optima). We have surveyed various global optimization methods suitable for neural-network learning, and have proposed the Novel method (Nonlinear global Optimization method Via External Lead) for nonlinear optimization and neural network learning. By combining global and local searches, we show how Novel can be used to find a good local minimum in the error space. Our key idea is to use a user-defined trace that pulls a search out of a local minimum without having to restart it from a new starting point. Using five benchmark problems, we have compared Novel against some of the best global optimization algorithms and have demonstrated its superior performance.

The key points studied and summarized in this paper are as follows: (a) Global optimization involves strategies to escape from local minima after reaching them, (b) A good global search strategy can find better local minima, (c) The search space of small neural networks is more rugged, requiring more powerful global search methods to get convergence, (d) A better local minimum in the error space does not always lead to neural networks that generalize better, (e) Smaller networks tend to generalize better.


J56
Benjamin W. Wah and Lon-Chan Chu, ``TCGD: A Time-Constrained Approximate Guided Depth-First Search Algorithm,'' International Journal on Artificial Intelligence Tools, World Scientific Publishing Co., Pte., vol. 6, no. 2 (1997), pp. 255-271.

In this paper, we develop TCGD, a problem-independent, time-constrained, approximate guided depth-first search (GDFS) algorithm. The algorithm is designed to achieve the best ascertained approximation degree under a fixed time constraint. We consider only searches with finite search space and admissible heuristic functions. We study NP-hard combinatorial optimization problems with polynomial-time computable feasible solutions. For the problems studied, we observe that the execution time increases exponentially as the approximation degree decreases, although anomalies may occur. The algorithms we study are evaluated by simulations using the symmetric traveling-salesperson problem.
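
A minimal sketch of approximate guided depth-first search for minimization is shown below: a node is pruned when its admissible lower bound, inflated by the approximation degree eps, already reaches the incumbent, which guarantees a solution within a factor (1 + eps) of optimal if the search completes. The time-budget handling here is an illustrative assumption, not TCGD's control policy.

```python
import time

def approx_gdfs(root, lower_bound, expand, is_leaf, value, eps, deadline):
    """Approximate guided DFS for minimization with non-negative lower bounds.

    Returns an incumbent guaranteed within a factor (1 + eps) of optimal if the
    search finishes before `deadline` (seconds, as returned by time.time())."""
    incumbent, incumbent_val = None, float("inf")
    stack = [root]
    while stack and time.time() < deadline:
        node = stack.pop()
        if lower_bound(node) * (1.0 + eps) >= incumbent_val:
            continue                                  # approximate pruning
        if is_leaf(node):
            if value(node) < incumbent_val:
                incumbent, incumbent_val = node, value(node)
            continue
        # Guided ordering: push the most promising child last so it is explored first.
        for child in sorted(expand(node), key=lower_bound, reverse=True):
            stack.append(child)
    return incumbent, incumbent_val
```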

J57
Chin-Chi Teng and Benjamin W. Wah, ``Automated Learning for Reducing the Configuration of a Feed-Forward Neural Network,'' IEEE Transactions on Neural Networks, vol. 7, no. 5 (Sept. 1996), pp. 1072-1085.

In this paper, we present two learning mechanisms for artificial neural networks (ANNs) that can be applied to solve classification problems with binary outputs. These mechanisms are used to reduce the number of hidden units of an ANN when trained by the cascade-correlation learning algorithm (CAS). Since CAS adds hidden units incrementally as learning proceeds, it is difficult to predict the number of hidden units required when convergence is reached. Further, learning must be restarted when the number of hidden units is larger than expected. Our key idea in this paper is to provide alternatives in the learning process and to select the best alternative dynamically based on run-time information obtained. Mixed-mode learning (MM), our first algorithm, provides alternative output matrices so that learning is extended to find one of the many one-to-many mappings instead of finding a unique one-to-one mapping. Since the objective of learning is relaxed by this transformation, the number of learning epochs can be reduced. This in turn leads to a smaller number of hidden units required for convergence. Population-based Learning for ANNs (PLAN), our second algorithm, maintains alternative network configurations and selects at run time promising networks to train, based on the error information obtained and the time remaining. This dynamic scheduling avoids training possibly unpromising ANNs to completion before exploring new ones. We show the performance of these two mechanisms by applying them to solve the two-spiral problem, a two-region classification problem, and the Pima Indian Diabetes Diagnosis problem.

J58
Arthur Ieumwananonthachai and Benjamin W. Wah, ``Statistical Generalization of Performance-Related Heuristics for Knowledge-Lean Applications,'' Int'l Journal of Tools with Artificial Intelligence, World Scientific, vol. 5, nos. 1 & 2 (June 1996), pp. 61-79.

In this paper, we present new results on the automated generalization of performance-related heuristics learned for knowledge-lean applications. By first applying genetics-based learning to learn new heuristics for some small subsets of test cases in a problem space, we study methods to generalize these heuristics to unlearned subdomains of test cases. Our method uses a new statistical metric called probability of win. By assessing the performance of heuristics in a range-independent and distribution-independent manner, we can compare heuristics across problem subdomains in a consistent manner. To illustrate our approach, we show experimental results on generalizing heuristics learned for sequential circuit testing, VLSI cell placement and routing, branch-and-bound search, and blind equalization. We show that generalization can lead to new and robust heuristics that perform better than the original heuristics across test cases of different characteristics.

J59
Pankaj Mehra and Benjamin W. Wah, ``Strategy Learning: A Survey of Problems, Methods, and Architectures,'' Int'l Journal of Tools with Artificial Intelligence, World Scientific, vol. 7, no. 4, Dec. 1998, pp. 487-550.

Problem solvers employ strategies in searching for solutions to given problem instances. Strategies have traditionally been designed by experts using prior knowledge and refined manually using trial and error. Recent attempts to automate these processes have produced strategy-learning systems. This paper shows how various issues in strategy learning are affected by the nature of performance tasks, problem solvers, and learning environments. Surveyed learning systems are grouped by the commonality of their approaches into four general architectures.

J60
Yi Shang and Benjamin W. Wah, ``A Discrete Lagrangian-Based Global-Search Method for Solving Satisfiability Problems,'' Journal of Global Optimization, Kluwer Academic Publishers, vol. 12, no. 1, Jan. 1998, pp. 61-99.

Satisfiability is a class of NP-complete problems that model a wide range of real-world applications. These problems are difficult to solve because they have many local minima in their search space, often trapping greedy search methods that utilize some form of descent. In this paper, we propose a new discrete Lagrange-multiplier-based global-search method for solving satisfiability problems. We derive new approaches for applying Lagrangian methods in discrete space, show that equilibrium is reached when a feasible assignment to the original problem is found, and present heuristic algorithms to look for equilibrium points. Our method and analysis provide a theoretical foundation and generalization of local search schemes that optimize the objective alone and clause-weight schemes that optimize the constraints alone. In contrast to local search methods that restart from a new starting point when a search reaches a local trap, the Lagrange multipliers in our method provide a force to lead the search out of a local minimum and move it in the direction provided by the Lagrange multipliers. In contrast to clause-weight schemes that rely only on the weights of violated constraints to escape from local minima, our method also uses the value of an objective function (in this case the number of violated constraints) to provide further guidance. The dynamic shift in emphasis between the objective and the constraints, depending on their relative values, is the key to Lagrangian methods. One of the major advantages of our method is that it has very few algorithmic parameters to be tuned by users, and the search procedure can be made deterministic and the results, reproducible. We demonstrate our method by applying it to solve an extensive set of benchmark problems in the DIMACS archive at Rutgers University. Our method often performs better than the best existing methods and can achieve an order-of-magnitude speedup for some problems.
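
A bare-bones sketch of a discrete Lagrangian search for SAT in the spirit of the method described above is given below: flip a variable whenever doing so decreases the Lagrangian value, and increase the multipliers of unsatisfied clauses when no improving flip exists. The clause representation, tie-breaking, and update sizes are illustrative choices, and the inner loop is deliberately unoptimized.

```python
import random

def dlm_sat(clauses, n_vars, max_steps=100000):
    """clauses: list of clauses, each a list of nonzero ints (+v or -v)."""
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    lam = [1.0] * len(clauses)

    def unsat(c):
        return not any(assign[abs(l)] == (l > 0) for l in c)

    def lagrangian():
        return sum(lam[i] for i, c in enumerate(clauses) if unsat(c))

    for _ in range(max_steps):
        violated = [i for i, c in enumerate(clauses) if unsat(c)]
        if not violated:
            return assign                          # satisfying assignment found
        best_var, best_val = None, sum(lam[i] for i in violated)
        for v in range(1, n_vars + 1):             # look for an improving flip
            assign[v] = not assign[v]
            val = lagrangian()
            if val < best_val:
                best_var, best_val = v, val
            assign[v] = not assign[v]
        if best_var is not None:
            assign[best_var] = not assign[best_var]
        else:
            for i in violated:                     # stuck: raise the multipliers
                lam[i] += 1.0
    return None
```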

J61
Pankaj Mehra and Benjamin W. Wah, ``Automated Learning of Load-Balancing Strategies in Multiprogrammed Distributed Systems,'' International Journal of System Sciences, vol. 28, no. 11, pp. 1077-1100, November 1997.

Dynamic load-balancing strategies for distributed systems seek to improve average completion time of independent tasks by migrating each incoming task to the site where it is expected to finish the fastest: usually the site having the smallest load index. SMALL is an off-line learning system for developing configuration-specific load-balancing strategies; it learns new load indices as well as tunes the parameters of given migration policies. Using a dynamic workload generator, a number of typical systemwide load patterns are first recorded; the completion times of several benchmark jobs are then measured at each site, under each of the recorded load patterns. These measurements are used to simultaneously train comparator neural networks, one per site. The comparators collectively model a set of perfect load indices in that they seek to rank, at arrival time, the possible destinations for an incoming task by their (not yet known) respective completion times. The numerous parameters of the decentralized dynamic load-balancing policy are then tuned using a genetic algorithm. We present experimental results for a mix of scientific and interactive workloads on Sun workstations connected by Ethernet. The policies tuned by SMALL are shown to intelligently and effectively exploit idle resources.

J62
Kumar N. Ganapathy, Benjamin W. Wah, and Chien-Wei Li, ``Designing a Scalable Processor Array for Recurrent Computations,'' IEEE Trans. on Parallel and Distributed Systems, vol. 8, no. 8, August 1997, pp. 840-856.

In this paper, we study the design of a coprocessor (CoP) to execute efficiently recursive algorithms with uniform dependencies. Our design is based on two objectives: (a) fixed bandwidth to main memory (MM) and (b) scalability to higher performance without increasing MM bandwidth. Our CoP has an access unit (AU) organized as multiple queues, a processor array (PA) with regularly connected processing elements (PEs), and input/output networks for data routing. Our design is unique because it addresses input/output bottleneck and scalability, two of the most important issues in integrating processor arrays in current systems. To allow processor arrays to be widely usable, they must be scalable to high performance with little or no impact on the supporting memory system. The use of multiple queues in AU also eliminates the use of explicit data addresses, thereby simplifying the design of the control program. We present a mapping algorithm that partitions a data dependence graph (DG) of an application into regular blocks, sequences the blocks through AU, and schedules the execution of the blocks, one at a time, on PA. We show that our mapping procedure minimizes the amount of communication between blocks in the partitioned DG, and sequences the blocks through AU to reduce the communication between AU and MM. Using the matrix-product and transitive-closure applications, we study design trade-offs involving (a) division of a fixed chip area between PA and AU, and (b) improvements in speedup with respect to increases in chip area. Our results show, for a fixed chip area, (a) that there is little degradation in throughput in using a linear PA as compared to a PA organized as a square mesh, and (b) that the design is not sensitive to the division of chip area between PA and AU. We further show that, for a fixed throughput, there is an inverse square root relationship between speedup and total chip area. Our study demonstrates the feasibility of a low-cost, memory bandwidth-limited, and scalable coprocessor system for evaluating recurrent algorithms with uniform dependencies.

J63
Tao Wang and Benjamin W. Wah, ``Handling Inequality Constraints in Continuous Nonlinear Global Optimization,'' Journal of Integrated Design and Process Science, Society for Design and Process Science, vol. 2, no. 3, 1998, pp. 1-10.

In this paper, we present a new method to handle inequality constraints and apply it in Novel (Nonlinear Optimization via External Lead), a system we have developed for solving constrained continuous nonlinear optimization problems. In general, in applying Lagrange-multiplier methods to solve these problems, inequality constraints are first converted into equivalent equality constraints. One such conversion method adds a slack variable to each inequality constraint in order to convert it into an equality constraint. The disadvantage of this conversion is that when the search is inside a feasible region, some satisfied constraints may still pose a nonzero weight in the Lagrangian function, leading to possible oscillations and divergence when a local optimum lies on the boundary of a feasible region. We propose a new conversion method called the MaxQ method such that all satisfied constraints in a feasible region always carry zero weight in the Lagrange function; hence, minimizing the Lagrange function in a feasible region always leads to local minima of the objective function. We demonstrate that oscillations do not happen in our method. We also propose methods to speed up convergence when a local optimum lies on the boundary of a feasible region. Finally, we show improved experimental results in applying our proposed method in Novel on some existing benchmark problems and compare them to those obtained by applying the method based on slack variables.
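
Schematically, the two conversions of an inequality constraint g_j(x) ≤ 0 discussed above can be written as follows; the exponent q shown here is an assumption (any q ≥ 2 keeps the transformed constraint differentiable):

```latex
\begin{align*}
\text{slack-variable:} \quad & g_j(x) + z_j^2 = 0, \\
\text{MaxQ:}           \quad & \bigl(\max\{0,\; g_j(x)\}\bigr)^{q} = 0, \qquad q \ge 2 .
\end{align*}
```

Inside the feasible region, where g_j(x) < 0, the MaxQ term and its gradient vanish, so satisfied constraints carry zero weight in the Lagrangian function, which is the property emphasized above.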

J64
Benjamin W. Wah and Tao Wang, ``Efficient and Adaptive Lagrange-Multiplier Methods for Nonlinear Continuous Global Optimization,'' Journal of Global Optimization, Kluwer Academic Press, vol. 14, no. 1, Jan. 1999, pp. 1-25.

Lagrangian methods are popular in solving continuous constrained optimization problems. In this paper, we address three important issues in applying Lagrangian methods to solve optimization problems with inequality constraints. First, we study methods to transform inequality constraints into equality constraints. An existing method, called the slack-variable method, adds a slack variable to each inequality constraint in order to transform it into an equality constraint. Its disadvantage is that when the search trajectory is inside a feasible region, some satisfied constraints may still exert some effect on the Lagrangian function, leading to possible oscillations and divergence when a local minimum lies on the boundary of the feasible region. To overcome this problem, we propose the MaxQ method, in which satisfied constraints carry no effect on the Lagrangian function. Hence, minimizing the Lagrangian function in a feasible region always leads to a local minimum of the objective function. We also study some strategies to speed up its convergence. Second, we study methods to improve the convergence speed of Lagrangian methods without affecting the solution quality. This is done by an adaptive-control strategy that dynamically adjusts the relative weights between the objective and the Lagrangian part, leading to better balance between the two and faster convergence. Third, we study a trace-based method to pull the search trajectory from one saddle point to another in a continuous fashion without restarts. This overcomes one of the problems in existing Lagrangian methods, which converge to only one saddle point and require random restarts to look for new saddle points, often missing good saddle points in the vicinity of those already found. Finally, we describe a prototype, Novel (Nonlinear Optimization via External Lead), that implements our proposed strategies and present improved solutions in solving a collection of benchmarks.

J65
Benjamin W. Wah, ``Generalization and Generalizability Measures,'' IEEE Transactions on Knowledge and Data Engineering, vol. 11, no. 1, Jan-Feb 1999, pp. 175-186.

In this paper, we define the generalization problem, summarize various approaches in generalization, identify the credit assignment problem, and present the problem and some solutions in measuring generalizability. We discuss anomalies in the ordering of hypotheses in a subdomain when performance is normalized and averaged, and show conditions under which anomalies can be eliminated. To generalize performance across subdomains, we present a measure called probability of win that measures the probability that one hypothesis is better than another. Finally, we discuss some limitations in using probabilities of win and illustrate their application in finding new parameter values for TimberWolf, a package for VLSI cell placement and routing.
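
One plausible way to compute such a probability of win under a normal approximation is sketched below; the paper's exact definition may differ.

```latex
\[
  P_{\mathrm{win}}(a, b) \;\approx\;
  \Phi\!\left(
      \frac{\hat{\mu}_a - \hat{\mu}_b}
           {\sqrt{\hat{\sigma}_a^2 / n_a \;+\; \hat{\sigma}_b^2 / n_b}}
  \right)
\]
```

Here the sample means and variances of the two hypotheses over their respective numbers of test cases are compared against the standard error of their difference, with larger means taken to be better; values near one indicate that hypothesis a is very likely better than hypothesis b.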

J66
Benjamin W. Wah, Y. Shang, and Zhe Wu, ``Discrete Lagrangian Methods for Optimizing the Design of Multiplierless QMF Banks,'' IEEE Trans. on Circuits and Systems, Part II, vol. 46, no. 9, Sept. 1999, pp. 1179-1191.

In this paper, we present a new discrete Lagrangian method for designing multiplierless QMF (quadrature mirror filter) banks. The filter coefficients in these filter banks are in powers-of-two (PO2), where numbers are represented as sums or differences of powers of two (also called Canonical Signed Digit--CSD--representation), and multiplications are carried out as additions, subtractions and shifts. We formulate the design problem as a nonlinear discrete constrained optimization problem, using reconstruction error as the objective, and stopband and passband energies, stopband and passband ripples and transition bandwidth as constraints. Using the performance of the best existing designs as constraints, we search for designs that improve over the best existing designs with respect to all the performance metrics. We propose a new discrete Lagrangian method for finding good designs, and study methods to improve the convergence speed of Lagrangian methods without affecting their solution quality. This is done by adjusting dynamically the relative weights between the objective and the Lagrangian part. We show that our method can find designs that improve over Johnston's benchmark designs using a maximum of three to six ONE bits in each filter coefficient instead of using floating-point representations. Our approach is general and is applicable to the design of other types of multiplierless filter banks.
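
As a generic illustration of the powers-of-two coefficient representation mentioned above (not the paper's design procedure), the helper below greedily approximates a real coefficient by a few signed powers of two, so that each multiplication reduces to shifts and additions or subtractions.

```python
import math

def to_po2(coefficient, terms=4, min_exp=-16):
    """Greedily approximate a real coefficient by at most `terms` signed
    powers of two; returns a list of (sign, exponent) pairs."""
    residual, po2 = float(coefficient), []
    for _ in range(terms):
        if residual == 0.0:
            break
        sign = 1 if residual > 0 else -1
        exp = max(min_exp, round(math.log2(abs(residual))))  # nearest power of two
        po2.append((sign, exp))
        residual -= sign * 2.0 ** exp
    return po2

def from_po2(po2):
    """Value represented by the signed powers of two (shift-and-add form)."""
    return sum(s * 2.0 ** e for s, e in po2)

# Example: 0.6875 = 2**-1 + 2**-2 - 2**-4, i.e. to_po2(0.6875, terms=3)
# returns [(1, -1), (1, -2), (-1, -4)].
```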

J67
Benjamin W. Wah and Zhe Wu, ``Discrete Lagrangian Methods for Designing Multiplierless Two-Channel PR-LP Filter Banks,'' Journal of VLSI Signal Processing, Kluwer Academic Press, vol. 21, no. 2, June 1999, pp. 131-150.

In this paper, we present a new search method based on the theory of discrete Lagrange multipliers for designing multiplierless PR (perfect reconstruction) LP (linear phase) filter banks. To satisfy the PR constraints, we choose a lattice structure that, under certain conditions, can guarantee the resulting two filters to be a PR pair. Unlike the design of multiplierless QMF filter banks that represents filter coefficients directly using PO2 (powers-of-two) form (also called Canonical Signed Digit or CSD representation), we use PO2 forms to represent the parameters associated with the lattice structure. By representing these parameters as sums or differences of powers of two, multiplications can be carried out as additions, subtractions, and shifts. Using the lattice representation, we decompose the design problem into a sequence of four subproblems. The first two subproblems find a good starting point with continuous parameters using a single-objective, multi-constraint formulation. The last two subproblems first transform the continuous solution found by the second subproblem into a PO2 form, then search for a design in a mixed-integer space. We propose a new search method based on the theory of discrete Lagrange multipliers for finding good designs, and study methods to improve its convergence speed by adjusting dynamically the relative weights between the objective and the Lagrangian part. We show that our method can find good designs using at most four terms in PO2 form in each lattice parameter. Our approach is unique because our results are the first successful designs of multiplierless PR-LP filter banks. It is general because it is applicable to the design of other types of multiplierless filter banks.

J68
Benjamin W. Wah and Dong Lin, ``Transformation-Based Reconstruction For Real-Time Voice Transmissions Over the Internet,'' IEEE Trans. on Multimedia, IEEE, vol. 1, no. 4, Dec. 1999, pp. 342-351.

In this paper, we explore the loss behavior encountered in transmitting real-time voice over the Internet and propose a new loss-concealment scheme to improve its received quality. One known technique to conceal loss is to send interleaved streams of voice samples and reconstruct missing or late samples by interpolation at the receiver. Based on this method, we propose a new transformation-based reconstruction algorithm. Its basic idea is for the sender to transform an input voice stream, according to the interpolation method used at the receiver and the predicted loss behavior, before interleaving the stream. The transformation is derived by minimizing reconstruction error in case of loss. We show that our method is computationally efficient and can be extended to various interleaving factors and interpolation-based reconstruction methods. Finally, we show performance improvements of our method by testing it over the Internet.
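
A toy illustration of interleaving-based concealment is sketched below: odd and even samples travel in separate packets, and a lost packet is estimated by interpolating between received neighbors. The sender-side transform that the paper optimizes is not reproduced here; all names are illustrative.

```python
def interleave(samples):
    """Split a stream into two descriptions sent in separate packets."""
    return samples[0::2], samples[1::2]

def reconstruct(even, odd=None):
    """Reassemble an even-length stream; if the odd description is missing
    (odd is None), estimate each odd sample from its received neighbors."""
    n = 2 * len(even)
    out = [0.0] * n
    out[0::2] = even
    if odd is not None:
        out[1::2] = odd
    else:
        for i in range(1, n, 2):
            right = out[i + 1] if i + 1 < n else out[i - 1]
            out[i] = 0.5 * (out[i - 1] + right)   # linear interpolation
    return out

# Example: losing the odd description of [1, 2, 3, 4] yields [1, 2.0, 3, 3.0].
```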

J69
Benjamin W. Wah, Tao Wang, Yi Shang, and Zhe Wu, ``Improving the Performance of Weighted Lagrange-Multiplier Methods for Nonlinear Constrained Optimization,'' Information Sciences, Elsevier Science Pub. Co., vol. 124, no. 1-4, May 2000, pp. 241-272.

Nonlinear constrained optimization problems in discrete and continuous spaces are an important class of problems studied extensively in operations research and artificial intelligence. These problems can be solved by a Lagrange-multiplier method in continuous space and by an extended discrete Lagrange-multiplier method in discrete space. When constraints are satisfied, these methods rely on gradient descents in the objective space to find high-quality solutions. On the other hand, when constraints are violated, these methods rely on gradient ascents in the Lagrange-multiplier space in order to increase the penalties on unsatisfied constraints and to force these constraints into satisfaction. The balance between gradient descents and gradient ascents depends on the relative weights between the objective function and the constraints, which indirectly control the convergence speed and solution quality of the Lagrangian method. To improve convergence speed without degrading solution quality, we propose an algorithm to dynamically control the relative weights between the objective and the constraints. Starting from an initial weight, the algorithm automatically adjusts the weights based on the behavior of the search progress. With this strategy, we are able to eliminate divergence, reduce oscillations, and speed up convergence. We show improved convergence behavior of our proposed algorithm on both nonlinear continuous and discrete problems.

J70
Benjamin W. Wah and Tao Wang, ``Tuning Strategies of Constrained Simulated Annealing for Nonlinear Global Optimization,'' Int'l J. of Artificial Intelligence Tools, World Scientific Publishing Co. Pte. Ltd., vol. 9, no. 1, March 2000, pp. 3-25.

This paper studies various strategies in constrained simulated annealing (CSA), a global optimization algorithm that achieves asymptotic convergence to constrained global minima (CGM) with probability one for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for discrete constrained local minima (CLM) in the theory of discrete Lagrange multipliers and its extensions to continuous and mixed-integer constrained NLPs. The strategies studied include adaptive neighborhoods, distributions to control sampling, acceptance probabilities, and cooling schedules. We report much better solutions than the best-known solutions in the literature on two sets of continuous benchmarks and their discretized versions.
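
An illustrative skeleton of a constrained-simulated-annealing-style search over a penalty function is given below: with some probability it proposes a move in the problem variables, accepted by the Metropolis rule on the change in the penalty function, and otherwise it raises a multiplier on a violated constraint. The neighborhood, cooling schedule, handling of multiplier moves, and the 0.8/0.2 split are simplifying assumptions, not the strategies evaluated in the paper.

```python
import math, random

def csa(f, constraints, neighbor, x0, T0=1.0, alpha=0.99, steps=20000):
    """Minimize f(x) s.t. c(x) == 0 for each c in constraints (discrete x)."""
    x = x0
    lam = [1.0] * len(constraints)
    L = lambda x: f(x) + sum(l * abs(c(x)) for l, c in zip(lam, constraints))
    T = T0
    for _ in range(steps):
        if random.random() < 0.8:
            y = neighbor(x)                      # move in the variable subspace
            delta = L(y) - L(x)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x = y
        else:                                    # ascend in the multiplier subspace
            violated = [i for i, c in enumerate(constraints) if c(x) != 0]
            if violated:
                lam[random.choice(violated)] += 1.0
        T *= alpha                               # geometric cooling schedule
    return x, L(x)
```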

J71
Benjamin W. Wah and Zhe Wu, ``Penalty Formulations and Trap-Avoidance Strategies for Solving Hard Satisfiability Problems,'' J. of Computer Science and Technology, Springer-Verlag, vol. 20, no. 1, Jan. 2005, pp. 3-17.

In this paper we study the solution of SAT problems formulated as discrete decision and discrete constrained optimization problems. Constrained formulations are better than traditional unconstrained formulations because violated constraints may provide additional forces to lead a search towards a satisfiable assignment. We summarize the theory of extended saddle points in penalty formulations for solving discrete constrained optimization problems and the associated discrete penalty method (DPM). We then examine various formulations of the objective function, choices of neighborhood in DPM, strategies for updating penalties, and heuristics for avoiding traps. Experimental evaluations on hard benchmark instances show that traps contribute significantly to the inefficiency of DPM and force a trajectory to repeatedly visit the same set of points or nearby points in the original variable space. To address this issue, we propose and study two trap-avoidance strategies. The first strategy adds extra penalties on unsatisfied clauses inside a trap, leading to very large penalties for unsatisfied clauses that are trapped more often and making these clauses more likely to be satisfied in the future. The second strategy stores information on points visited before, whether inside traps or not, and avoids visiting points that are close to points visited before. It can be implemented by modifying the penalty function in such a way that, if a trajectory gets close to points visited before, an extra penalty will take effect and force the trajectory to a new region. It specializes to the first strategy because traps are special cases of points visited before. Finally, we show experimental results on evaluating benchmarks in the DIMACS and SATLIB archives and compare our results with existing results on GSAT, WalkSAT, LSDL, and Grasp. The results demonstrate that DPM with trap avoidance is robust as well as effective for solving hard SAT problems.

J72
Xiao Su and Benjamin W. Wah, ``Multi-Description Video Streaming with Optimized Reconstruction-Based DCT and Neural-Network Compensations,'' IEEE Trans. on Multimedia, vol. 3, no. 1, March 2001, pp. 123-131.

Packet and compression losses are two sources of quality losses when streaming compressed video over unreliable IP networks, such as the Internet. In this paper, we propose two new approaches for concealing such losses. First, we present a joint sender-receiver approach for designing transforms in multi-description coding (MDC). In the receiver, we use a simple interpolation-based reconstruction algorithm, as sophisticated concealment techniques cannot be employed in real time. In the sender, we design an optimized reconstruction-based discrete cosine transform (ORB-DCT) with an objective of minimizing the mean squared error, assuming that some of the descriptions are lost and that the missing information is reconstructed by simple averaging at the destination. Second, we propose an artificial neural network to compensate for compression losses introduced in MDC. Experimental results show that our proposed algorithms perform well in real Internet tests.

J73
Benjamin W. Wah and Minglun Qian, ``Violation-Guided Neural-Network Learning for Constrained Formulations in Time-Series Predictions,'' Int'l Journal on Computational Intelligence and Applications, World Scientific, vol. 1, no. 4, Dec. 2001, pp. 383-398.

Time-series predictions by artificial neural networks (ANNs) are traditionally formulated as unconstrained optimization problems. As an unconstrained formulation provides little guidance on search directions when a search gets stuck in a poor local minimum, we have proposed to use a constrained formulation in order to use constraint violations to provide additional guidance. In this paper, we formulate ANN learning with cross-validations for time-series predictions as a non-differentiable nonlinear constrained optimization problem. Based on our theory of Lagrange multipliers for discrete constrained optimization, we propose an efficient learning algorithm, called violation guided back-propagation (VGBP), that computes an approximate gradient using back-propagation (BP), that introduces annealing to avoid blind acceptance of trial points, and that applies a relax-and-tighten (R&T) strategy to achieve faster convergence. Extensive experimental results on well-known benchmarks, when compared to previous work, show one to two orders-of-magnitude improvement in prediction quality, while using fewer weights.

J74
Yi Shang, Longzhuang Li, and Benjamin W. Wah, ``Optimization Design of Biorthogonal Filter Banks for Image Compression,'' Information Sciences, Elsevier Science Pub., vol. 132, no. 1, Feb. 2001, pp. 23-51.

In this paper, we present a new approach for designing filter banks for image compression. This approach has two major components: optimization and generalization. In the optimization phase, we formulate the design problem as a nonlinear optimization problem whose objective consists of both the performance metrics of the image coder, such as the peak signal-to-noise ratio (PSNR), and those of individual filters. Filter banks are optimized in the optimization phase based on a set of training images. In the generalization phase, the filter bank that can be generalized to other images is selected from the candidates obtained in the optimization phase to be the final result. The filter bank selected should perform well not only on the training examples used in the design process but also on test cases not seen. In contrast to existing methods that design filter banks independently from the other operations in an image compression algorithm, our approach allows us to find filter banks that work best in a specific image compression algorithm for a certain class of images. In system prototype development, we adopt the agent-based approach to achieve better modularity, portability, and scalability. Agents in the multi-agent system are specialized in performing problem formulation, image compression, optimization, and generalization. In the experiments, we show that on a set of benchmark images our approach has found filter banks that perform better than the existing filter banks in different image compression algorithms and at different compression ratios.

J75
Xiao Su and Benjamin W. Wah, ``Reconstruction-Based Subband Image Coding for UDP Transmissions over the Internet,'' Journal of VLSI Signal Processing, Kluwer Academic Press, vol. 34, no. 1-2, May-June 2003, pp. 29-48.

Quality-delay trade-offs can be made in transmitting subband-coded images in the Internet by using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered primarily by TCP today, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images and the reconstruction of missing information at the receiver based on information received. We first determine empirically the interleaving factors that should be used in order to keep the probability of unrecoverable packet losses sufficiently small. Next, we propose a joint sender-receiver approach for designing transforms in multi-description subband coding. In the receiver, we use a simple interpolation-based reconstruction algorithm, as sophisticated concealment techniques cannot be employed in practice. In the sender, we design an optimized reconstruction-based subband transform (ORB-ST), with an objective of minimizing the mean squared error, assuming that some of the descriptions are lost and that the missing information is reconstructed by simple averaging at the destination. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP delivery of MDC images is an attractive alternative to TCP delivery.

J76
Benjamin W. Wah and Dong Lin, ``LSP-Based Multiple-Description Coding for Real-Time Low Bit-Rate Voice over IP,'' IEEE Trans. on Multimedia, Vol. 7, No. 1, Feb. 2005, pp. 167-178.

A fundamental issue in real-time interactive voice transmissions over unreliable IP networks is the loss or late arrival of packets for playback. This problem is especially serious when transmitting low bit rate-coded speech with pervasive dependencies introduced. In this case, the loss or late arrival of a single packet will lead to the loss of subsequent dependent frames. In this paper, we study end-to-end loss-concealment schemes for ensuring high quality in playback. We propose a novel multiple description-coding method for concealing packet losses in transmitting low bit rate-coded speech. Based on high correlations observed in linear predictor parameters--in the form of Line Spectral Pairs (LSPs)--of adjacent frames, we generate multiple descriptions in senders by interleaving LSPs, and reconstruct lost LSPs in receivers by linear interpolations. As excitation codewords have low correlations, we further enlarge the segment size for excitation generation and replicate excitation codewords in all the descriptions in order to maintain the same transmission bandwidth. Our proposed scheme can be extended easily to more than two descriptions and can adapt its number of descriptions dynamically to network-loss conditions. Experimental results on FS-1016 CELP, ITU G.723.1, and FS MELP coders show good performance of our scheme.

J77
Xindong Wu, Philip S. Yu, Gregory Piatetsky-Shapiro, Nick Cercone, T. Y. Lin, Ramamohanarao Kotagiri, and Benjamin W. Wah,
``Data Mining: How Research Meets Practical Development,''
Knowledge and Information Systems: An International Journal, Springer-Verlag, vol. 5, no. 2, April 2003, pp. 248-261.

At the 2001 IEEE International Conference on Data Mining in San Jose, California, on November 29 - December 2, 2001, there was a panel discussion on how data mining research meets practical development. One of the motivations for organizing the panel discussion was to provide useful advice for people in industry exploring their directions in data mining development. Based on the panel discussion, this paper presents the views and arguments of the panel members, the Conference Chair, and the Program Committee Co-Chairs. These people as a group have both academic and industrial experience in different data mining-related areas such as databases, machine learning, and neural networks. We answer questions such as: (1) how far is data mining from practical development, (2) how does data mining research differ from practical development, and (3) what are the most promising areas in data mining for practical development?

J78
Benjamin W. Wah and Yixin Chen,
``Hybrid Evolutionary and Annealing Algorithms for Nonlinear Discrete Constrained Optimization,''
Int'l Journal of Computational Intelligence and Applications, Imperial College Press, UK, vol. 3, no. 4, Dec. 2003, pp. 331-355.

This paper presents a procedural framework that unifies various mechanisms to look for discrete-neighborhood saddle points in solving discrete constrained optimization problems (DCOPs). Our approach is based on the necessary and sufficient condition on local optimality in discrete space, which shows the one-to-one correspondence between the discrete-space constrained local minima of a problem and the saddle points of the corresponding Lagrangian function. To look for such saddle points, we study various mechanisms for performing ascents of the Lagrangian function in the original-variable subspace and descents in the Lagrange-multiplier subspace. Our results show that CSAEA, a combined constrained simulated annealing and evolutionary algorithm, performs well when using mutations and crossovers to generate trial points and accepting them based on the Metropolis probability. We apply iterative deepening to determine the optimal number of generations in CSAEA and show that its performance is robust with respect to changes in population size. To test the performance of the procedures developed, we apply them to solve some continuous and mixed-integer nonlinear programming (NLP) benchmarks and show that they obtain better results than those of existing algorithms.
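
The Metropolis acceptance rule mentioned above can be sketched as follows (an illustrative fragment, not the CSAEA implementation): trial points produced by mutation or crossover are accepted according to the change in the Lagrangian value, with descents favored in the original-variable subspace and ascents favored in the Lagrange-multiplier subspace.

```python
import math
import random

def metropolis_accept(delta: float, temperature: float) -> bool:
    """Accept a trial point whose Lagrangian change is `delta` (improving moves
    have delta <= 0) with the Metropolis probability min(1, exp(-delta/T))."""
    if delta <= 0.0:
        return True
    return random.random() < math.exp(-delta / temperature)

# For descents in the original-variable subspace, delta = L(x_trial, lam) - L(x, lam);
# for ascents in the Lagrange-multiplier subspace, delta = L(x, lam) - L(x, lam_trial).
```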

J79
Benjamin W. Wah and Yixin Chen,
``Constraint Partitioning in Penalty Formulations for Solving Temporal Planning Problems,''
Artificial Intelligence, Elsevier, vol. 170, no. 3, pp. 187-231, 2006.

In this paper, we study the partitioning of constraints in temporal planning problems formulated as mixed-integer nonlinear programming (MINLP) problems. Constraint partitioning is attractive because it leads to much easier subproblems, where each is a significant relaxation of the original problem. Moreover, each subproblem is very similar to the original problem and can be solved by any existing solver with little or no modification. Constraint partitioning, however, introduces global constraints that may be violated when subproblems are evaluated independently. To reduce the overhead in resolving such global constraints, we develop in this paper new conditions and algorithms for limiting the search space to be backtracked in each subproblem. Using a penalty formulation of a MINLP where the constraint functions of the MINLP are transformed into non-negative functions, we present a necessary and sufficient extended saddle-point condition (ESPC) for constrained local minimization. When the penalties are larger than some thresholds, our theory shows a one-to-one correspondence between a constrained local minimum of the MINLP and an extended saddle point of the penalty function. Hence, one way to find a constrained local minimum is to increase gradually the penalties of those violated constraints and to look for a local minimum of the penalty function using any existing algorithm until a solution to the constrained model is found. Next, we extend the ESPC to constraint-partitioned MINLPs and propose a partition-and-resolve strategy for resolving violated global constraints across subproblems. Using the discrete-space ASPEN and the mixed-space MIPS planners to solve subproblems, we show significant improvements on some planning benchmarks, both in terms of the quality of the plans generated and the execution times to find them.
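
A schematic rendering of the penalty formulation and the extended saddle-point condition (simplified here for equality constraints h and inequality constraints g <= 0; the paper states the general conditions precisely):

```latex
% Penalty function with non-negative constraint transforms
L_p(x,\alpha,\beta) \;=\; f(x) \;+\; \sum_i \alpha_i\,\lvert h_i(x)\rvert
                     \;+\; \sum_j \beta_j\,\max\bigl(0,\,g_j(x)\bigr).

% ESPC (schematic): x^* is a constrained local minimum iff there exist finite
% thresholds \alpha^* \ge 0, \beta^* \ge 0 such that, for all \alpha \ge \alpha^*,
% \beta \ge \beta^*, all x in a neighborhood of x^*, and all \alpha' \ge 0, \beta' \ge 0,
L_p(x^*,\alpha',\beta') \;\le\; L_p(x^*,\alpha,\beta) \;\le\; L_p(x,\alpha,\beta).
```

The partition-and-resolve strategy applies this condition within each subproblem and gradually raises the penalties on violated global constraints across subproblems.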

J80
Benjamin W. Wah and Yixin Chen,
``Subgoal Partitioning and Global Search for Solving Temporal Planning Problems in Mixed Space,''
Int'l J. of Artificial Intelligence Tools, World Scientific Publishing Co. Pte. Ltd., vol. 13, no. 4, Dec. 2004, pp. 767-790.

We study in this paper the partitioning of the constraints of a temporal planning problem by subgoals, their sequential evaluation before parallelizing the actions, and the resolution of inconsistent global constraints across subgoals. Using a Lagrangian formulation and the theory of extended saddle points, we propose a global-search strategy that looks for local minima in the original-variable space of the Lagrangian function and for local maxima in the Lagrange-multiplier space. Our approach improves over a previous scheme that partitions constraints along the temporal horizon. The previous scheme leads to many global constraints that relate states in adjacent stages, which means that an incorrect assignment of states in an earlier stage of the horizon may violate a global constraint in a later stage of the horizon. To resolve the violated global constraint in this case, state changes will need to propagate sequentially through multiple stages, often leading to a search that gets stuck in an infeasible point for an extended period of time. In this paper, we propose to partition all the constraints by subgoals and to add new global constraints in order to ensure that state assignments of a subgoal are consistent with those in other subgoals. Such an approach allows the information on incorrect state assignments in one subgoal to propagate quickly to other subgoals. Using MIPS as the basic planner in a partitioned implementation, we demonstrate significant improvements in time and quality in solving some PDDL2.1 benchmark problems.

J81
Xiao Su and Benjamin W. Wah,
``Loss Aware Rate Allocations in H.263 Coded Video Transmissions,''
Journal of Circuits, Systems, and Computers, World Scientific Publishing Co. Pte. Ltd., vol. 14, no. 6, Dec. 2005, pp. 1157-1171.

For packet video, information loss and bandwidth limitation are two factors that affect video playback quality. Traditional rate-allocation approaches have focused on optimizing video quality under the bandwidth constraint alone. However, in the best-effort Internet, packets carrying video data are susceptible to losses and need to be reconstructed at the receiver side. In this paper, we propose loss-aware rate allocation at both the group-of-block (GOB) level and the macroblock level, given that certain packets are lost during transmission and reconstructed using simple interpolation methods at the receiver side. Experimental results show that our proposed algorithms can produce videos of higher quality when sent over the lossy Internet.

J82
J. W. Han, Y. X. Chen, G. Dong, J. Pei, B. W. Wah, J. Wang, and Y. D. Cai,
``The Stream Cube: An Architecture for Multi-Dimensional Analysis of Data Streams,''
Distributed and Parallel Databases: Special Issue on Data Warehousing and Data Streams, Springer-Verlag, Sept. 2005.

Real-time surveillance systems, telecommunication systems, and other dynamic environments often generate tremendous (potentially infinite) volumes of stream data: the volume is too huge to be scanned multiple times. Much of such data resides at a rather low level of abstraction, whereas most analysts are interested in relatively high-level dynamic changes (such as trends and outliers). To discover such high-level characteristics, one may need to perform on-line multi-level, multi-dimensional analytical processing of stream data. In this paper, we propose an architecture, called stream cube, to facilitate on-line, multi-dimensional, multi-level analysis of stream data. For fast online multi-dimensional analysis of stream data, three important techniques are proposed for efficient and effective computation of stream cubes. First, a tilted time frame model is proposed as a multi-resolution model to register time-related data: the more recent data are registered at finer resolution, whereas the more distant data are registered at coarser resolution. This design reduces the overall storage of time-related data and adapts nicely to the data analysis tasks commonly encountered in practice. Second, instead of materializing cuboids at all levels, we propose to maintain a small number of critical layers. Flexible analysis can be efficiently performed based on the concepts of the observation layer and the minimal interesting layer. Third, an efficient stream data cubing algorithm is developed which computes only the layers (cuboids) along a popular path and leaves the other cuboids for query-driven, on-line computation. Based on this design methodology, the stream data cube can be constructed and maintained incrementally with a reasonable amount of memory, computation cost, and query response time. This is verified by our substantial performance study. The stream data cube architecture facilitates online analytical processing of stream data and also forms a preliminary data structure for online stream data mining. The impact of the design and implementation of the stream data cube in the context of stream data mining is also discussed in the paper.
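
As an informal illustration of the tilted time frame idea (the paper's frame structure and cubing algorithms are considerably more elaborate), recent measurements can be kept at fine granularity and rolled up into coarser buckets as they age; the class below is a hypothetical two-level example for a summable measure:

```python
from collections import deque

class TiltedTimeFrame:
    """Toy tilted time frame: keep the last `fine_slots` values at fine resolution;
    every `ratio` fine values that expire are merged (summed) into one coarse bucket."""

    def __init__(self, fine_slots: int = 24, coarse_slots: int = 30, ratio: int = 24):
        self.fine = deque(maxlen=fine_slots)      # e.g., last 24 hours, hourly
        self.coarse = deque(maxlen=coarse_slots)  # e.g., last 30 days, daily
        self.ratio = ratio
        self._pending = []                        # expired fine values awaiting roll-up

    def add(self, value: float) -> None:
        if len(self.fine) == self.fine.maxlen:
            self._pending.append(self.fine[0])    # oldest fine value is about to expire
        self.fine.append(value)
        if len(self._pending) == self.ratio:      # roll 24 hourly values into one day
            self.coarse.append(sum(self._pending))
            self._pending.clear()
```

The same idea extends to further levels (quarter-hours, hours, days, months), so that the storage needed grows much more slowly than the time span covered.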

J83
Y. X. Chen, C. W. Hsu, and B. W. Wah,
``Temporal Planning using Subgoal Partitioning and Resolution in SGPlan,''
J. of Artificial Intelligence Research, AAAI Press, vol. 26, Aug. 2006, pp. 323-369.

In this paper, we present the partitioning of mutual-exclusion (mutex) constraints in temporal planning problems and its implementation in the SGPlan planner. Based on the strong locality of mutex constraints observed in many benchmarks of the Fourth International Planning Competition (IPC4), we propose to partition the constraints of a planning problem into groups based on their subgoals. Constraint partitioning leads to significantly easier subproblems that are similar to the original problem and that can be efficiently solved by the same planner with some modifications to its objective function. We present a partition-and-resolve strategy that looks for locally optimal subplans in constraint-partitioned temporal planning subproblems and that resolves those inconsistent global constraints across the subproblems. We also discuss some implementation details of SGPlan, which include the resolution of violated global constraints, techniques for handling producible resources, landmark analysis, path finding and optimization, search-space reduction, and modifications of Metric-FF when used as a basic planner in SGPlan. Last, we show results on the sensitivity of each of these techniques in quality-time trade-offs and experimentally demonstrate that SGPlan is effective for solving the IPC3 and IPC4 benchmarks.

J84
B. W. Wah and D. Xin,
``Optimization of Bounds in Temporal Flexible Plans with Dynamic Controllability,''
Int'l J. of Artificial Intelligence Tools, World Scientific Publishing Co. Pte. Ltd., Feb. 2007, vol. 16, no. 1, pp. 17-44.

A temporal flexible planning problem that involves contingent and requirement events can be formulated as a simple temporal network with uncertainty (STNU). An STNU is controllable when there is a strategy for executing the requirement events (or actions) in such a way that all the conditions involving contingent events can be satisfied in all situations. The most interesting and useful controllability property is dynamic controllability, in which the remaining actions in an STNU can always be scheduled under all possible feasible durations of future contingent events when all the past contingent events are known. In this paper, we propose and study a novel problem of assigning bounds on the duration of each requirement link in order for the resulting STNU to be dynamically controllable and to minimize the total cost over the allowed durations of all requirement links. We first prove the NP-hardness of the problem with a linear cost function. We then formulate the dynamic controllability of an STNU as the constraints in a nonlinear optimization problem. Finally, we present methods for reducing the number of constraints in order to make the problem tractable and to demonstrate the computational performance of our methods.

J85
Y. X. Chen, G. Dong, J. W. Han, J. Pei, B. W. Wah, and J. Y. Wang,
``Regression Cubes with Lossless Compression and Aggregation,''
IEEE Trans. on Knowledge and Data Engineering, vol. 18, no. 12, Dec. 2006, pp. 1585-1599.

As OLAP engines are being widely used nowadays to support multidimensional data analysis and intelligent decision making, it is desirable to support in data cubes advanced statistical measures, such as regression and filtering measures, in addition to traditional simple measures such as count and average. Such advanced measures will allow users to model, smooth, and predict the trends and patterns of data. Although raw data is generally collected at the lowest level, an analyst is often more interested in generating higher and multiple levels of abstraction without accessing the raw data. Efficient computation of aggregated statistical measures in multidimensional spaces is a challenging problem as existing aggregation algorithms for simple distributive and algebraic measures are inadequate for such computation.

In this paper, we propose the concept of regression cubes in which regression models can be obtained for any data cuboid efficiently without accessing the raw data. The traditional classification of measures into distributive, algebraic, and holistic measures cannot accommodate advanced statistical models such as regression and filters. In this paper, we propose a fundamentally new class of measures, compressible measures, in order to support efficient computation of the statistical models. For compressible measures, we can efficiently compute the statistical models at an arbitrary data cell, while each data cell is compressed into a small auxiliary matrix with a fixed size independent of the number of tuples. We can then compute the statistical measures for any data cell from the compressed data of the lower-level cells without accessing the raw data. Time- and space-efficient lossless aggregation formulae are derived for regression and filtering measures. Our analytical and experimental studies show that the proposed approach substantially reduces the memory usage and the overall response time for statistical analysis of multidimensional data.
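
One way to see why linear-regression measures are compressible (a simplified sketch in the spirit of the paper, not its exact aggregation formulae): each base cell keeps only the small normal-equation matrices of its tuples, and the regression coefficients of any higher-level cell follow by summing these matrices, without access to the raw data.

```python
import numpy as np

def compress(X: np.ndarray, y: np.ndarray):
    """Auxiliary matrices for one base cell: their size depends only on the number
    of regression variables, not on the number of tuples in the cell."""
    return X.T @ X, X.T @ y

def aggregate_and_fit(cells):
    """Regression coefficients for a higher-level cell obtained by lossless
    aggregation of its children's auxiliary matrices."""
    XtX = sum(c[0] for c in cells)
    Xty = sum(c[1] for c in cells)
    return np.linalg.solve(XtX, Xty)

# Example: two base cells aggregated into their parent cuboid cell.
rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(50, 3)), rng.normal(size=50)
X2, y2 = rng.normal(size=(80, 3)), rng.normal(size=80)
beta = aggregate_and_fit([compress(X1, y1), compress(X2, y2)])
# beta equals the OLS fit on the concatenation of the two cells' raw data.
```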

J86
C. W. Hsu, Y. X. Chen, and B. W. Wah,
``Subgoal Ordering and Granularity Control for Incremental Planning,''
Int'l J. of Artificial Intelligence Tools, World Scientific Publishing Co. Pte. Ltd., vol. 16, no. 4, Aug. 2007, pp. 707-723.

In this paper, we study strategies in incremental planning for ordering and grouping subproblems partitioned by the subgoals of a planning problem. To generate a rich set of partial orders for ordering subproblems, we propose an algorithm based on a relaxed plan that ignores the delete lists. The new algorithm considers both the initial and the goal states and can effectively order subgoals in such a way that greatly reduces the number of invalidations during incremental planning. We have also considered trade-offs between the granularity of the subgoal sets and the complexity of solving the overall planning problem. We propose an efficient strategy for dynamically adjusting the grain size in partitioning in order to minimize the total complexity. We further evaluate a redundant-ordering scheme that uses two different subgoal orders to improve the solution quality, without greatly sacrificing run-time efficiency. Experimental results on using Metric-FF, YAHSP, and LPG-TD-speed as the embedded planners in incremental planning show that our strategies are general for improving the time and quality of these planners across various benchmarks. Finally, we compare the performance of the three planners, the incremental versions using these planners as embedded planners, and SGPlan4.1.

J87
D. Xin, J. W. Han, X. Li, Z. Shao, and B. W. Wah, ``Computing Iceberg Cubes by Top-Down and Bottom-Up Integration: The StarCubing Approach,''
IEEE Trans. on Knowledge and Data Engineering, vol. 19, no. 1, Jan. 2007, pp. 111-126.

Data cube computation is one of the most essential but expensive operations in data warehousing. Previous studies have developed two major approaches, top-down versus bottom-up. The former, represented by the MultiWay Array Cube (called the MultiWay) algorithm, aggregates simultaneously on multiple dimensions; however, it cannot take advantage of a priori pruning when computing iceberg cubes (cubes that contain only aggregate cells whose measure values satisfy a threshold, called the iceberg condition). The latter, represented by BUC, computes the iceberg cube bottom-up and facilitates a priori pruning. BUC explores fast sorting and partitioning techniques; however, it does not fully explore multidimensional simultaneous aggregation. In this paper, we present a new method, Star-Cubing, that integrates the strengths of the previous two algorithms and performs aggregations on multiple dimensions simultaneously. It utilizes a star-tree structure, extends the simultaneous aggregation methods, and enables the pruning of the group-bys that do not satisfy the iceberg condition. Our performance study shows that Star-Cubing is highly efficient and outperforms the previous methods.

J88
B. W. Wah and M. L. Qian,
``Constrained Formulations and Algorithms for Predicting Stock Prices by Recurrent FIR Neural Networks,''
Int'l J. of Information Technology and Decision Making, World Scientific Pub. Co., vol. 5, no. 4, Dec. 2006, pp. 638-658.

In this paper, we develop a new constrained artificial-neural-network (ANN) formulation and the associated learning algorithm for predicting stock prices, a difficult time-series prediction problem. We characterize daily stock prices as a noisy non-stationary time series and identify its predictable low-frequency components. Using a recurrent FIR (finite-impulse-response) ANN, we formulate the learning problem as a constrained optimization problem, develop constraints for incorporating cross validations, and solve the learning problem using algorithms based on the theory of extended saddle points for nonlinear constrained optimization. Finally, we illustrate our prediction results on ten stock-price time series. Our main contributions in this paper are the channel-specific low-pass filtering of noisy time series obtained by wavelet decomposition, the transformation of the low-pass signals to improve their stationarity, and the incorporation of constraints on cross validation that can improve the accuracy of predictions. Our experimental results demonstrate good prediction accuracy and annual returns.

J89
B. W. Wah, Y. X. Chen, and T. Wang,
``Simulated Annealing with Asymptotic Convergence for Nonlinear Constrained Optimization,''
Journal of Global Optimization, Springer-Verlag, vol. 39, pp. 1-37, 2007.

In this paper, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of nonlinear constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs) that shows the one-to-one correspondence between a constrained local minimum and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the problem-variable subspace of the penalty function and probabilistic ascents in the penalty subspace. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we present constraint-partitioned simulated annealing (CPSA) that exploits the locality of constraints in nonlinear optimization problems. CPSA leads to much lower complexity as compared to that of CSA by partitioning the constraints of a problem into significantly simpler subproblems, solving each independently, and resolving those violated global constraints across the subproblems. We prove that both CSA and CPSA asymptotically converge to a constrained global minimum with probability one in discrete optimization problems. The result extends conventional simulated annealing (SA), which guarantees asymptotic convergence in discrete unconstrained optimization, to that in discrete constrained optimization. Moreover, it establishes the condition under which optimal solutions can be found in constraint-partitioned nonlinear optimization problems. Finally, we evaluate CSA and CPSA by applying them to solve some continuous constrained optimization benchmarks and compare their performance to that of other penalty methods.
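
A bare-bones sketch of the CSA iteration described above (illustrative only; the names, neighborhoods, move probabilities, and cooling schedule here are simplifications, not the paper's algorithm):

```python
import math
import random

def csa(f, g, x0, n_iters=10000, t0=1.0, cooling=0.999, step=0.1):
    """Constrained simulated annealing sketch: probabilistic descents in the
    variable subspace and ascents in the penalty subspace of
    L(x, lam) = f(x) + sum_j lam_j * max(0, g_j(x)), with constraints g_j(x) <= 0."""
    x = list(x0)
    lam = [1.0] * len(g(x))
    L = lambda x_, lam_: f(x_) + sum(l * max(0.0, c) for l, c in zip(lam_, g(x_)))
    t = t0
    for _ in range(n_iters):
        if random.random() < 0.8:                 # trial move in x: probabilistic descent
            i = random.randrange(len(x))
            x_new = x.copy(); x_new[i] += random.uniform(-step, step)
            delta = L(x_new, lam) - L(x, lam)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = x_new
        else:                                     # trial move in lam: probabilistic ascent
            j = random.randrange(len(lam))
            lam_new = lam.copy(); lam_new[j] = max(0.0, lam[j] + random.uniform(-step, step))
            delta = L(x, lam) - L(x, lam_new)     # increases in L accepted readily
            if delta <= 0 or random.random() < math.exp(-delta / t):
                lam = lam_new
        t *= cooling
    return x
```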

J90
B. Sat and B. W. Wah,
``Analyzing Voice Quality in Popular VoIP Applications,''
IEEE Multimedia, vol. 16, Jan.-Mar. 2009, pp. 46-58.

This paper presents a general method for comparing the conversational quality of speech communication systems over networks with delays, and its application in studying four VoIP systems. The perceptual evaluation of an interactive conversation depends on the quality and the latency of the one-way speech segments received. Due to path-dependent and non-stationary conditions in the Internet, the delays incurred when a conversation switches from one client to another are asymmetric and may lead to a degraded perceptual quality as compared to that of a face-to-face setting. The conversational dynamics can further be disrupted by variations in the one-way speech quality and latency. As the factors that affect conversational quality may counteract one another, trade-offs must be made among them. Currently available objective and subjective standards, however, do not adequately capture these trade-offs, and there are no good methods for evaluating the conversational quality of VoIP systems. In this paper, we present a method for studying these trade-offs under repeatable network and conversational conditions. When applied to four commonly used VoIP systems, our results show that each achieves some trade-offs, but none attains the best quality under all conditions. Lastly, we discuss a systematic approach for predicting user preference between two VoIP systems in terms of objective measures that can be easily acquired.

J91
B. W. Wah and B. Sat,
``The Design of VoIP Systems With High Perceptual Conversational Quality,''
Journal of Multimedia, Special Issue on Advances in Interactive Digital Entertainment Technologies, vol. 4, no. 2, 2009, pp. 49-62.

This paper describes our work on real-time two-party and multi-party VoIP (voice-over-IP) systems that can achieve high perceptual conversational quality. It focuses on the fundamental understanding of conversational quality and its trade-offs among the design of speech codecs and strategies for network control, playout scheduling, and loss concealment. We have studied three key aspects that address the limitations of existing work and improve the perceptual quality of VoIP systems. Firstly, we have developed a statistical approach based on just-noticeable difference (JND) to significantly reduce the large number of subjective tests, as well as a classification method to automatically learn and generalize the results to unseen conditions. Using network and conversational conditions measured at run time, the classifier learned helps adjust the control algorithms to achieve high perceptual conversational quality. Secondly, we have designed a cross-layer speech codec that interfaces with the loss-concealment and playout-scheduling algorithms in the packet-stream layer in order to be more robust and effective against packet losses. Thirdly, we have developed a distributed algorithm for equalizing mutual silences and an overlay network for multi-party VoIP systems. The approach leads to multi-party conversations with high listening-only speech quality and balanced mutual silences.

J92
B. W. Wah and B. Sat,
``Statistical Scheduling of Offline Comparative Subjective Evaluations for Real-Time Multimedia,''
IEEE Trans. on Multimedia, Vol. 11, no. 6, Oct. 2009, pp. 1114-1130.

In this paper, we study the statistical scheduling of offline subjective tests for evaluating alternative control schemes in real-time multimedia applications. These applications are characterized by multiple counteracting objective quality metrics (such as delay and signal quality) that can be affected by various control schemes. However, the trade-offs among these metrics with respect to the subjective preferences of users are not defined. As a result, it is difficult to select the proper control schemes that lead to the best subjective quality at run time. Since subjective tests are expensive to conduct and the number of possible control schemes and run-time conditions is prohibitively large, it is important that a minimum number of such tests be conducted offline, and that the results learned can be generalized to unseen conditions with statistical confidence. To this end, we study in this paper efficient algorithms for scheduling a sequence of subjective tests, while leaving the generalization of limited offline subjective tests to guide the operation of the control schemes at run time to a future paper. Using Monte-Carlo simulations, we verify the robustness of our algorithms in terms of their accuracy and efficiency.

J93
Q. Li, W. H. Lau, E. W. C. Leung, F. Li, V. Lee, B. W. Wah, and H. Ashman,
``Emerging Internet Technologies for E-Learning (Guest Editors' Introduction),''
IEEE Internet Computing, July/August 2009, pp. 11-17.

The electronic learning (or e-learning) industry, including Web-based learning, has been booming worldwide in recent years, due to factors such as reduced environmental costs and impact, quality education made affordable, and convenience and flexibility to learners. The Internet's ubiquity is promoting new sophisticated techniques and powerful applications for a new paradigm in e-learning that uses the Web as an interface for single-user as well as collaborative learning. Web-based learning and new concepts such as virtual classrooms, laboratories, and universities introduce many new issues. The articles in this special issue of IEEE Internet Computing highlight three efforts in creating Internet-based e-learning technologies that tackle challenges such as the management of learning objects in an open and scalable architecture, the incorporation of learners' pedagogical features in Web-based learning environments, and digital game-based learning.

J94
R. W. H. Lau, N. Y. Yen, F. Li, and B. W. Wah,
``Recent development in multimedia e-learning technologies,''
World Wide Web, vol. 17, no. 2, Springer, pp. 189-198.

Multimedia and networking technologies have significantly impacted our daily activities, particularly in terms of how we learn. Nowadays, classroom teaching no longer simply relies on chalk and blackboard as the prime medium for course dissemination. E-learning technologies have made it possible to provide a virtual classroom environment on the Web by supporting teacher-student and student-student communications, course material distribution, as well as online student assessments. They provide students with more control over their learning schedule and pace. On top of this, multimedia technologies further offer students different forms of media to match their learning styles, leading to enhancements of their learning effectiveness. This extended introduction discusses the latest e-learning-specific multimedia technologies, their research challenges, and future trends from both pedagogical and technological perspectives. We also summarize the papers included in this special issue.

J95
H. Hu, R. W. H. Lau, H. Hu, and B. W. Wah,
``On View Consistency in Multi-Server Distributed Virtual Environments,''
IEEE Trans. on Visualization and Computer Graphics, vol. 20, no. 10, Oct. 2014, pp. 1428-1440.

A distributed virtual environment (DVE) is a shared virtual environment (VE) that allows remote users to interact with each other through networks. DVEs are becoming very popular due to some prominent applications, such as online games and virtual worlds. To support a large number of users, a multi-server DVE architecture may be adopted, with each server managing a subset of users. However, there are two critical problems with this architecture: view inconsistency caused by delays and server overloading caused by uneven distribution of users. While the first problem affects users' perception of the VE and causes user disputes, the second problem affects the system response time. In this paper, we first show that view consistency and load balancing are conflicting objectives. We then propose an efficient joint optimization framework to address both problems. Our results show that the proposed method can significantly alleviate view inconsistency, which is important to the interactivity of DVE applications.

J96
X. Jin, B. W. Wah, X. Cheng, and Y. Wang,
``Significance and Challenges of Big Data Research,'' Big Data Research, vol. 2, Elsevier, March 2015, pp. 59-64.

In recent years, the rapid development of the Internet, the Internet of Things, and Cloud Computing has led to the explosive growth of data in almost every industry and business area. Big data has rapidly developed into a hot topic that attracts extensive attention from academia, industry, and governments around the world. In this position paper, we first briefly introduce the concept of big data, including its definition, features, and value. We then identify from different perspectives the significance and opportunities that big data brings to us. Next, we present representative big data initiatives all over the world. We describe the grand challenges (namely, data complexity, computational complexity, and system complexity), as well as possible solutions to address these challenges. Finally, we conclude the paper by presenting several suggestions on carrying out big data projects.

J98
J. X. Xu and B. W. Wah,
``Optimizing the Perceptual Quality of Real-Time Multimedia Applications,'' IEEE Multimedia, vol. 22, no. 4, Oct.-Dec. 2015, pp. 14-28.

Designing a multimedia system to achieve high perceptual quality requires a black-box approach to tune control inputs according to preferences from subjective tests. The authors present OptPQ, a systematic method for optimizing perceptual quality using just-noticeable difference (JND) profiles.

J99
B. W. Wah and M. M. Y. Chang,
``Designing World Class Research for Societal Impact,'' The Academic Executive Brief, Elsevier, vol. 6, 2016.

The authors explore the approaches considered for understanding CUHK's impact on society, including reputation surveys, altmetrics, and bibliometrics. They suggest that a deep analysis of citations may be more meaningful than the current dependence on the number of citations as a proxy for a paper's quality.

J100
J. X. Xu and B. W. Wah,
``Optimality of Greedy Algorithm for Generating Just-Noticeable Difference Surfaces,'' IEEE Trans. on Multimedia, vol. 18, no. 7, July 2016, pp. 1330-1337.

Tuning multimedia applications at run time to achieve high perceptual quality entails the search of nonlinear mappings that determine how control inputs should be set in order to lead to high user-perceived quality. Offline subjective tests are often used for this purpose but they are expensive to conduct because each can only evaluate one mapping at a time and there can be infinitely many such mappings to be evaluated. In this paper, we present a greedy algorithm that uses a small number of subjective test results to accurately approximate this space of mappings. Based on an axiom on monotonicity and the property of just-noticeable differences, we prove its optimality in minimizing the average absolute error between the approximate and the original mappings. We further demonstrate the results using numerical simulations and the application of the mappings found to tune the control of the multimedia game BZFlag.

J101
J. X. Xu and B. W. Wah,
``Consistent Synchronization of Action Order with Least Noticeable Delays in Online Games,'' ACM Trans. on Multimedia Computing, Communications, and Applications (TOMM), vol. 13, no. 1, Jan. 2017.

When running multi-player online games on IP networks with losses and delays, the order of actions may be changed when compared to the order on an ideal network with no delays and losses. To maintain a proper ordering of events, traditional approaches either use rollbacks to undo certain actions or use local lags to introduce additional delays. Both may be perceived by players because their changes exceed the just-noticeable-difference (JND) threshold. In this paper we propose a novel method for ensuring a strongly consistent completion order of actions, where strong consistency refers to the same completion order as well as the same interval between any completion time and the corresponding ideal reference completion time under no network delay. We find that small adjustments within the JND on the duration of an action are not perceivable, as long as the duration is comparable to the network round-trip time (RTT). We utilize this property to control the vector of durations of actions and formulate the search for the vector as a multi-dimensional optimization problem. By using the property that players are generally more sensitive to the most prominent delay effect (the one with the highest probability of noticeability, or the probability of correctly noticing a change when compared to the reference), we prove that the optimal solution occurs when the probabilities of noticeability of the individual adjustments are equal. As this search can be done efficiently in polynomial time (around 5 ms) with a small amount of space (around 160 KB), the optimal control can be determined at run time. Lastly, we evaluate our approach on BZFlag, a popular open-source online shooting game.
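
The equal-probability result can be read as a minimax property. Stated schematically (the paper gives the precise formulation and proof), if the total duration adjustment needed to absorb the network delay is fixed at D and each probability of noticeability p_i is continuous and increasing in its adjustment d_i, the optimization is

```latex
\min_{d_1,\dots,d_n}\;\max_{1\le i\le n}\; p_i(d_i)
\qquad \text{subject to} \qquad \sum_{i=1}^{n} d_i = D,\quad d_i \ge 0,
```

and, whenever the balance is attainable with nonnegative adjustments, the minimum of the maximum is reached when p_1(d_1) = ... = p_n(d_n), since lowering the most noticeable adjustment forces some other adjustment, and hence its probability of noticeability, to rise.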

J102
F. H. Zhang and B. W. Wah,
``Fundamental Principles on Learning New Features for Effective Dense Matching,'' IEEE Trans. on Image Processing (TIP), vol. 27, no. 2, Feb. 2018, pp. 822-836.

In dense matching (including stereo matching and optical flow), nearly all existing approaches are based on simple features, such as gray or RGB color, gradients, or simple transformations like the census transform, to calculate matching costs. These features do not perform well in complex scenes that may involve radiometric changes, noise, overexposure, and/or textureless regions. Various problems may appear, such as wrong matching at the pixel or region level, flattening/breaking of edges, and/or even entire structural collapse. In this paper, we propose two fundamental principles based on the consistency and the distinctiveness of features. We show that almost all existing problems in dense matching are caused by features that violate one or both of these principles. To systematically learn good features for dense matching, we develop a general multi-objective optimization based on these two principles and apply convolutional neural networks (CNNs) to find new features that lie on the Pareto frontier. Using two-frame optical flow and stereo matching as applications, our experimental results show that the features learned can significantly improve the performance of state-of-the-art approaches. On the KITTI benchmarks, our method ranks first on the two stereo benchmarks and is the best among existing two-frame optical-flow algorithms on the flow benchmarks.

J103
G. J. Hwang, H. Xie, B. W. Wah, and D. Gasevic,
``Vision, Challenges, Roles and Research Issues of Artificial Intelligence in Education,'' Computers and Education: Artificial Intelligence, vol. 1, 2020, pp. 1-5.

The rapid advancement of computing technologies has facilitated the implementation of AIED (Artificial Intelligence in Education) applications. AIED refers to the use of AI (Artificial Intelligence) technologies or application programs in educational settings to facilitate teaching, learning, or decision making. With the help of AI technologies, which simulate human intelligence to make inferences, judgments, or predictions, computer systems can provide personalized guidance, support, or feedback to students, as well as assist teachers or policymakers in making decisions. Although AIED has been identified as the primary research focus in the field of computers and education, the interdisciplinary nature of AIED presents a unique challenge for researchers with different disciplinary backgrounds. In this paper, we present the definition and roles of AIED studies from the perspective of educational needs. We propose a framework to show the considerations of implementing AIED in different learning and teaching settings. This framework can help guide researchers with both computers and education backgrounds in conducting AIED studies. We outline 10 potential research topics in AIED that are of particular interest to this journal. Finally, we describe the types of articles we would like to solicit and the management of submissions.

J104
B. W. Wah,
``Using Kernels to Harness the Complexity of Big Data Applications,'' Int'l Journal on Artificial Intelligence Tools, vol. 31, no. 3, 2022, pp. 1-11.

This paper presents the concept of kernels to address the complexity of solving big-data applications. The solution strategies in these applications often require evaluating alternative subspaces of the big data and selecting the best solution. As the data space in these problems is so vast that it is infeasible to scan the data even once, we need domain-specific methods for identifying a promising and manageable subspace with a high density of solutions before looking for individual solutions. To this end, we introduce kernels to represent properties such as the statistical quality, average density, or probability of solutions in a subspace, so that subspaces with suboptimal kernels can be pruned. We classify various past approaches based on their analysis methods and illustrate each with an example. Our results confirm that kernels can effectively harness the complexity of solving big-data applications.

J105
B. W. Wah,
``Dominance Pruning in Machine Learning for Solving Financial Trading and Real-Time Multimedia Applications,'' American Journal of Artificial Intelligence, Science Publishing Group, vol. 6, no. 2, 2022, pp. 36-47.

This paper presents the design of dominance relations to reduce the space traversed in machine learning for solving two applications in financial trading and real-time multimedia. A machine-learning algorithm designed for an application with a huge search space will need to perform an efficient traversal of the space during learning. It will be more effective if it employs a powerful pruning mechanism to eliminate suboptimal candidates before using them in the learning algorithm. In our approach, we present dominance relations for pruning subspaces with suboptimal kernels that are otherwise evaluated in learning, where kernels represent the statistical quality, average density, or probability of solutions in a subspace. Specifically, when one subspace dominates another by a dominance relation, we can prune the latter and guarantee without searching both that the kernel of the latter cannot be better than that of the first. As a result, a significant portion of the search space will be pruned by those nondominated subspaces during learning. In the financial trading application studied, we use mean reversion as our strategy for learning the set of promising stocks and Pareto-optimality as our dominance relation to reduce the space evaluated in learning. In the multimedia application, we propose a dominance relation using an axiom from our past work to approximate the subspace of perceptual qualities within an error threshold. The pruning mechanism allows the learning of the mapping from controls to perceptual qualities while eliminating the evaluation of all those mappings that are within the error thresholds. In both cases, we can harness the complexity of machine learning by reducing the candidate space evaluated.
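
To make the Pareto-optimality dominance relation concrete (an illustrative sketch, not the paper's trading system; the kernel vectors below are hypothetical), a candidate subspace whose kernel vector is dominated on every objective can be pruned before any further learning effort is spent on it:

```python
def dominates(a, b):
    """True if kernel vector `a` is at least as good as `b` on every objective
    (here: larger is better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_prune(candidates):
    """Keep only non-dominated candidates; dominated subspaces are pruned
    without being evaluated further by the learning algorithm."""
    kept = []
    for i, (name_i, k_i) in enumerate(candidates):
        if not any(dominates(k_j, k_i)
                   for j, (_, k_j) in enumerate(candidates) if j != i):
            kept.append((name_i, k_i))
    return kept

# Hypothetical kernel vectors: (mean-reversion score, liquidity score) per stock subspace.
candidates = [("A", (0.8, 0.6)), ("B", (0.5, 0.5)), ("C", (0.9, 0.4))]
print(pareto_prune(candidates))   # "B" is dominated by "A" and is pruned
```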

J106
B. W. Wah and J. X. Xu,
``Optimizing Multi-Dimensional Perceptual Quality in Online Interactive Multimedia,'' IEEE Multimedia, IEEE, vol. 30, no. 3, July-Sept. 2023, pp. 119-128.

Network latencies and losses in online interactive multimedia applications may lead to a degraded perception of quality, such as lower interactivity or sluggish responses. We can measure these degradations in perceptual quality by the just-noticeable difference, awareness, or probability of noticeability (pnotice); the latter measures the likelihood that subjects can notice a change from a reference to a modified reference. In our previous work, we developed an efficient method for finding the perceptual quality for one metric under simplex control. However, the integration of the perceptual qualities of several metrics has so far been heuristic. In this paper, we present a formal approach to optimally combine the perceptual quality of multiple metrics into a joint measure that shows their tradeoffs. Our result shows that the optimal balance occurs when the pnotice values of all the component metrics are equal. Furthermore, our approach leads to an algorithm with a linear (instead of combinatorial) complexity in the number of metrics. Finally, we present the application of our method in two case studies, one on VoIP for finding the optimal operating points and the second on fast-action games to hide network delays while maintaining the consistency of action orders.

CO71
Lon-Chan Chu and Benjamin W. Wah, ``Optimization in Real Time,'' Proc. IEEE Real Time Systems Symposium, Nov. 1991, pp. 150-159.

In this paper, we present a search algorithm called real-time search (RTS) for solving combinatorial optimization problems under real-time constraints. The problems we study are NP-hard and have good heuristic algorithms for generating feasible solutions. The algorithm we develop aims at finding the best possible solution within a given deadline. Since this objective is generally not achievable without first solving the problem, we use an alternative heuristic objective that looks for the solution with the best ascertained approximation degree. Our algorithm schedules a sequence of guided depth-first searches, each searching for a more accurate solution (based on the approximation degree set), or solutions deeper in the search tree (based on the threshold set), or a combination of both. Five versions of the RTS algorithm for setting approximation degrees and/or thresholds are formulated, evaluated, and analyzed. We develop an exponential model for characterizing the relationship between the approximation degrees achieved and the time taken. Our results show that our RTS algorithm finds solutions better than a guided depth-first search, and that the solutions found are very close to the best possible solution that can be found in the time allowed. We evaluate our results using the symmetric traveling-salesman, the knapsack, and the production-planning problems.

CO76
Kumar Ganapathy and Benjamin Wah, ``Optimal Design of Processor Arrays for Uniform Recurrences,'' Proc. Int'l Conf. on Application-Specific Array Processors, IEEE Computer Society, Aug. 1992, pp. 636-648.

In this paper we present a parameter-based approach for synthesizing systolic architectures from uniform recurrence equations. The scheme presented is a generalization of the Parameter Method proposed by Li and Wah. The approach synthesizes optimal arrays of any lower dimension from a general uniform recurrence description of the problem. In previous attempts at mapping uniform recurrences into lower-dimensional arrays, the optimality of the resulting designs is not guaranteed. As an illustration of the technique, optimal linear arrays for matrix multiplication are given. Due to space limitations, proofs of the theorems in this paper are not shown. In a companion paper, we present details of the proofs of Theorems 1 and 3 and a different approach for defining the constraints shown in Theorem 2. A detailed design for solving path-finding problems is also presented there.

CO77
Arthur Ieumwananonthachai and Benjamin W. Wah, ``Parallel Statistical Selection in Multiprocessors,'' Proc. Int'l Conf. on Parallel Processing, Pennsylvania State University Press, Aug. 1992, vol. III, pp. 190-194.

In this paper, we develop parallel multistage selection strategies for guiding the concurrent evaluation of candidates from a large pool of candidates, with the objective of finding the candidate with the maximum (resp. minimum) value in a limited amount of time. We present a statistical model for finding the appropriate number of candidates to be sampled and when each should be sampled. We verify the analytical results by simulations and evaluate the performance of strategies that are analytically intractable. Our performance results show that there is linear speedup in parallel processing over selection done sequentially. Results obtained from applying our selection method to learn heuristics for a real-world application demonstrate that heuristics with improved performance can be found by a systematic search of the space of feasible heuristics in limited time. Due to space limitations, we are unable to present the details of the experiments and the exact heuristics found.

CO78
Kumar Ganapathy and Benjamin Wah, ``Synthesizing Optimal Lower Dimensional Processor Arrays,'' Proc. Int'l Conf. on Parallel Processing, CRC Press, vol. 3 (Aug. 1992), pp. 96-103.

Most existing methods for synthesizing systolic architectures can only map n-dimensional recurrences to (n-1)-dimensional arrays. In this paper, we generalize the parameter-based approach (the general parameter method, GPM) of Li and Wah to map n-dimensional uniform recurrences to any k-dimensional processor array, where k is less than n. In our approach, operations of the target array are captured by a set of parameters, and constraints are derived to avoid computational conflicts and data collisions. We show that the optimal array for any objective function expressed in terms of these parameters can be found by a systematic enumeration over a polynomial search space. In contrast, previous attempts do not guarantee the optimality of the resulting designs. We illustrate our method with optimal single-pass linear arrays for the re-indexed Warshall-Floyd path-finding algorithm. Finally, we show the application of GPM to practical situations characterized by restrictions on resources, such as processors or completion time, rather than a well-defined objective.

CO80
Lon-Chan Chu and Benjamin W. Wah, ``Band Search: An Efficient Alternative to Guided Depth-first Search,'' Proc. 4th IEEE Int'l Conf. on Tools with Artificial Intelligence (Nov. 1992), pp. 154-161.

In this paper, we propose a novel search algorithm called band search that generalizes guided depth-first and best-first searches. The search allocates a band of at most W nodes in each level of the search tree for storing active nodes in that level, D priority lists, one for each level, for storing overflow nodes, and D counters, one for each level, for keeping track of and limiting the degree of backtracking allowed. The algorithm has three major features: a) it selects nodes for expansion in a best-first fashion from among all nodes in the bands, b) it moves nodes from an overflow list into the corresponding band in a depth-first fashion, and c) it restricts backtracking so that at most W nodes in a level are fully searched before new nodes in this level are allowed into the band. The algorithm specializes to a guided depth-first search (GDFS) when W is one, and to a best-first search (BFS) when W is unlimited. By choosing W appropriately, we show empirically that the algorithm can often outperform GDFS and has performance close to that of BFS. Moreover, the algorithm can be used instead of GDFS in iterative searches, such as IDA*, MIDA*, and DFS*. Finally, we identify conditions under which the algorithm behaves like BFS, and the anomalous behavior when the algorithm performs worse than GDFS.

CO81
Steven R. Schwartz and Benjamin W. Wah, ``Automated Parameter Tuning in Stereo Vision under Time Constraints,'' Proc. 4th Int'l Conf. on Tools with Artificial Intelligence (Nov. 1992), pp. 162-169.

This paper presents a method for tuning parameters under a fixed time constraint for a general binocular stereo-vision algorithm. A major difficulty in stereo vision, as well as in other vision algorithms, lies in adjusting the large variety of parameters for maximizing performance. This effort is usually performed by human experts with a minimum of formal guidelines. To automate this process, we develop Teacher 4.2, a generate-and-test system that systematically generates new parameter values by analyzing the results of previous tests, and that performs limited and controlled tests on the candidates generated using high-speed computers. The system is modeled as a statistical selection problem operating under a given time constraint. It divides the time allowed into stages, where promising parameter-value sets found in one stage are passed to the next stage for further testing, and selects the parameter-value set deemed best by the final stage as the result. We show experimentally that our system can find new parameter-value sets which in some cases are better than the ones originally found by extensive hand-tuning and commonly used heuristics. Our experiments further show that different parameter values may be required under different objectives and performance constraints. Our system provides an automated tool for generating new parameters that can be part of an adaptive stereo-vision system, capable of adapting to changing algorithm specifications as well as changing goals and target domains.

CO85
M. Aboelaze, D. Lee, and B. W. Wah, ``Two-Dimensional Digital Filtering Using Constant-I/O Systolic Arrays,'' Proc. IEEE Int'l Symposium on Circuits and Systems (March 1993), Chicago, IL, pp. 255-258.

We present in this paper systolic arrays with a constant number of input/output (I/O) ports for two-dimensional (2-D) FIR and IIR filtering. Our design has an array of L x N processing elements (PEs), where L (less than or equal to N) is a technology-dependent parameter related to the number of I/O ports. Each PE in our design has a microprogrammed arithmetic logic unit (ALU), a control unit, a fixed number of I/O buffers, and O(N/L) memory. Our design specializes to a square mesh when L=N, and to a linear array when L=1. It can implement both FIR and IIR filtering in O(N^2 M/L) time, which is asymptotically optimal.

CO86
Pankaj Mehra and Benjamin Wah, ``Automated Learning of Workload Measures for Load Balancing on a Distributed System,'' Proc. Int'l Conference on Parallel Processing, CRC Press, Aug. 1993, pp. III-263-III-270.

Load-balancing systems use workload indices to dynamically schedule jobs. We present a novel method of automatically learning such indices. Our approach uses comparator neural networks, one per site, which learn to predict the relative speedup of an incoming job using only the resource-utilization patterns observed prior to the job's arrival. Our load indices combine information from the key resources of contention: CPU, disk, network, and memory. Our learning algorithm overcomes the lack of job-specific information by learning to compare the relative speedups of different sites with respect to the same job, rather than attempting to predict absolute speedups. We present conditions under which such learning is viable. Our results show that indices learnt using comparator networks correctly pick the best destination in most cases when incoming jobs are short; accuracy degrades as execution time increases.

CO87
Wei-jia Shang and Benjamin W. Wah, ``Dependence Analysis and Architecture Design for Bit-Level Algorithms,'' Proc. Int'l Conference on Parallel Processing, CRC Press, Aug. 1993, pp. I-30-I-38.

In designing application-specific bit-level architectures and in programming existing bit-level processor arrays, it is necessary to expand a word-level algorithm into its bit-level form before dependence analysis can be performed. In this paper, we consider dependence structures of bit-level algorithms as functions of three components -- dependence structures of word-level algorithms, dependence structures of the arithmetic algorithms implementing word-wise operations, and algorithm expansions. Based on these components, we can derive dependence structures of bit-level algorithms without using time-consuming general dependence analysis methods. To illustrate our approach, we derive two dependence structures for bit-level matrix multiplication and apply a method developed earlier to design two bit-level architectures. One of these architectures is O(p) times faster than the best word-level architecture, where p is the word length. The speedup we found here is true in general because a bit in a bit-level architecture goes to the next processor for processing as soon as it is available.

CO88
A. Aizawa and B. W. Wah, ``Scheduling of Genetic Algorithms in a Noisy Environment,'' Proc. Int'l Conf. on Genetic Algorithms (July 1993), Int'l Soc. for Genetic Algorithms, pp. 48-55.

In this paper, we present efficient algorithms for adjusting configuration parameters of genetic algorithms that operate in a noisy environment. Assuming that the population size is given, we address two problems specifically important in a noisy environment. First, we study the duration-sizing problem that determines dynamically the duration of each generation. Next, we study the sample-allocation (sizing) problem that determines adaptively the number of evaluations taken from each population in a generation. For these two problems, we model the search process as a statistical selection process and derive equations useful for controlling the duration and the sample sizes. Our result shows that these adaptive procedures improve the performance of genetic algorithms over those of commonly used static ones.

CO89
Vijay Karamcheti and Benjamin W. Wah, ``Scheduling of Dynamic Divide-and-Conquer Computations on Multicomputers,'' Proc. Computers and Software Applications Conference (Nov. 1993), IEEE Computer Society Press, pp. 352-359.

The scheduling of tasks for applications with dynamic behavior has traditionally relied on externally observable metrics such as the number of active processes. This paper presents a new scheduling strategy based on the observation that it may be possible to capture the near-term resource requirements of active tasks by expressions involving task parameters. Run-time evaluation of these expressions yields estimates of task behavior that are valid over a short, future interval of time. The heuristics proposed, when used in conjunction with information supplied by profiling, can be used to annotate the source program with such expressions. Preliminary simulation results show that the use of near-future estimates in a dynamic scheduling strategy for divide-and-conquer algorithms consistently improves over traditional dynamic strategies. The performance of this strategy approaches that of the best-known deterministic strategy while incurring an overhead of the same order as other dynamic strategies.

CO90
Chin-Chi Teng and Benjamin W. Wah, ``Improvement of Supervised Learning by Linear Mapping,'' Proc. IEEE Int'l Joint Conf. on Neural Networks (October 1993), pp. 1697-1700.

In this paper, we propose a new supervised learning method to improve the learning speed for feed-forward neural networks. Our method is different from traditional supervised learning methods (such as back-propagation) that find a one-to-one mapping between the given input pattern matrix and the desired output pattern matrix. Instead, it finds one of the one-to-many mappings between the input matrix and an intermediate output matrix, and transforms the intermediate output matrix to the desired output matrix in one step using linear mapping techniques. Learning is faster with our method because there exist many intermediate output matrices, and learning can stop whenever one such matrix is found. Our extensive experimental results show that our learning algorithm converges to within a reasonable range of error after a few training epochs, making it suitable for dynamic real-time applications in which the network may need to be re-trained periodically.
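
The one-step linear-mapping step can be illustrated as follows (a minimal sketch assuming a least-squares mapping; the shapes and data are hypothetical, and the paper's particular linear mapping technique may differ): once training has produced an intermediate output matrix H for the input patterns, a linear map W that takes H to the desired output matrix Y is computed directly.

```python
import numpy as np

# Hypothetical shapes: P training patterns, m intermediate outputs, k desired outputs.
rng = np.random.default_rng(1)
H = rng.normal(size=(200, 10))    # intermediate outputs of the network (P x m)
Y = rng.normal(size=(200, 3))     # desired output pattern matrix (P x k)

# Solve for W minimizing ||H W - Y||_F in one step (least squares).
W, *_ = np.linalg.lstsq(H, Y, rcond=None)

Y_hat = H @ W                     # final outputs after the one-step linear mapping
```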

CO91
Pankaj Mehra and Benjamin Wah, ``Population-Based Learning of Load Balancing Policies for a Distributed Computer System,'' Proc. of Computing in Aerospace 9 Conference (Oct. 19-21, 1993), AIAA, pp. 1120-1130.

Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. In this paper, we present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our learning algorithm overcomes the lack of task-specific information by learning to compare the relative speedups of different sites with respect to the same site, rather than attempting to predict absolute speedups. We present experimental results based on the execution of benchmark programs on a network of Sun workstations connected by a local area network. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.

CO92
Kumar Ganapathy and Benjamin Wah, ``Designing a Coprocessor for Recurrent Computations,'' Proc. 5th IEEE Symposium on Parallel and Distributed Processing (December 1-4, 1993), pp. 806-813.

In this paper, we present the design of an application-specific coprocessor for algorithms that can be modeled as uniform recurrences or ``uniformized'' affine recurrences. The coprocessor has a regular array of processors connected to an access-unit for intermediate storage of data. The distinguishing feature of our approach is that we assume the coprocessor to be interfaced to a standard, slow (single-ported) memory with low bandwidth. Hence, good performance is achieved by effectively exploiting data locality in the applications by the compiler, and the final architecture is chosen by a tradeoff analysis driven by the mapping process. Preliminary results indicate that the coprocessor has significantly lower clock rates or higher performance than that of existing RISC processors and is cost-effective for executing loop computations.

CO93
H. Kitano, W. V. Hahn, L. Hunter, R. Oka, B. W. Wah, and T. Yokoi, ``Grand Challenge AI Applications,'' Proc. 13th Int'l Joint Conf. on Artificial Intelligence (San Mateo, CA, Aug. 1993), Morgan Kaufmann Pub., pp. 1677-1683.

This paper presents the position statements of the panelists in a panel discussion at the 13th International Joint Conference on Artificial Intelligence.

CO94
Chin-Chi Teng and Benjamin Wah, ``Mixed-Mode Learning: A Method for Reducing the Number of Hidden Units in Cascade Correlation,'' Proc. Int'l Symposium on Artificial Neural Networks (National Chiao Tung University, Hsinchu, Taiwan, ROC, Dec. 1-4, 1993), pp. I-101--I-107.

In this paper, we present a new learning mechanism called mixed-mode learning. This learning mechanism transforms an existing supervised learning algorithm from one that finds a unique one-to-one mapping into an algorithm that finds one of the many one-to-many mappings. Since the objective of learning is relaxed by this transformation, the number of learning epochs can be reduced significantly. We show in this paper that mixed-mode learning can be applied to the well-known cascade correlation learning algorithm to reduce the number of hidden units required for convergence when applied to classification problems whose desired outputs are binary. Our experimental results confirm this reduction, although they show that more learning time is required than in cascade correlation. Reducing the number of hidden units at the expense of additional learning time is usually a desirable trade-off.

CO95
Kumar N. Ganapathy and Benjamin Wah, ``Optimizing General Design Objectives in Processor-Array Design,'' Proc. International Parallel Processing Symposium (April 1994), IEEE Computer Society, pp. 295-302.

In this paper we present an improved search procedure for the General Parameter Method (GPM) [GanWah92a, GanWah92b]. Our procedure maps uniform dependence algorithms to application-specific processor arrays (PAs). It can optimize general design objectives with certain non-monotonicity properties, i.e., those that do not increase monotonically with the parameters. An example of such an objective is the minimization of the total completion time, including load and drain times. In contrast, earlier design methods can only deal with monotonic objectives. We present results for the matrix-product problem using this search technique; application of the technique on the transitive-closure problem is presented elsewhere [ForWahShaGan93]. We also show that the parameters in GPM can be expressed in terms of the schedule vector Pi and allocation matrix S in the popular dependence-based method (DM), thereby allowing GPM to be used in DM for finding optimal designs for uniform dependence algorithms.

CO96
Chin-Chi Teng and Benjamin W. Wah, ``An Automated Design System for finding the Minimal Configuration of a Feed-Forward Neural Network,'' Proc. Int'l Conf. on Neural Networks (June 1994), IEEE, vol. 3, pp. 1295-1300.

In this paper, we present a method for finding the minimal configuration of a feed-forward artificial neural network (ANN) for solving a given application problem. We assume that the cascade-correlation (CAS) training algorithm is used to train the weights of the ANNs concerned. Under a given time constraint that is long enough to train tens of ANNs completely, we divide the time into quanta, and present a method for scheduling dynamically the ANN to be trained in each quantum from a pool of partially trained ANNs. Our goal is to find an ANN configuration with a smaller number of hidden units as compared to the alternative of applying the CAS algorithm repeatedly to train each ANN to completion before exploring new ANNs. Our system is a population-based generate-and-test method that maintains a population of candidate ANNs and selectively trains those that are predicted to require smaller configurations. Since it is difficult to predict the exact number of hidden units required when the CAS algorithm terminates, our system compares two partially trained ANNs and predicts which one will converge with a smaller number of hidden units relative to the other. Our prediction mechanism is based on a comparator neural network (CANN) that takes as inputs the TSSE-versus-time behavior of training performed already on two ANNs, and that predicts which one will require a smaller number of hidden units when convergence is reached. We show that our CANN can predict correctly most of the time, and present experimental results on better configurations found in a given time limit for a classification problem and the two-spiral problem.

CO97
Benjamin W. Wah, ``Issues in Strategy Learning (Plenary Address),'' Proc. 1994 Symposium on Intelligent Systems in Communications and Power (Feb. 21-23, 1994), pp. 238-244.

In this paper, we discuss twelve application-specific issues in strategy learning. These issues are important both when designing strategy-learning systems and when learning new application-specific strategies. For each issue, we discuss the problems involved and identify some effective solutions.

CO98
Yao-Jen Chang and Benjamin W. Wah, ``Polynomial Programming Using Groebner Bases,'' Proc. IEEE Int'l Conference on Computer Software and Applications (Nov. 1994), pp. 236-241.

Finding the global optimal solution for a general nonlinear program is a difficult task except for very small problems. In this paper we identify a class of nonlinear programming problems called polynomial programming problems (PP). A polynomial program is an optimization problem with a scalar polynomial objective function and a set of polynomial constraints. By using Groebner Bases, we can determine the global minimum of a polynomial program in a reasonable amount of time and memory.
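
As a toy illustration of the approach (not the paper's algorithm), the stationarity conditions of a small polynomial program can be triangularized with a Groebner basis and then solved exactly; the objective, the constraint, and the use of sympy below are assumptions made for the sketch.

    import sympy as sp

    x, y, lam = sp.symbols('x y lam', real=True)
    f = x**4 + y**4 - 4*x*y            # hypothetical polynomial objective
    g = x**2 + y**2 - 4                # hypothetical polynomial constraint, g = 0

    # Stationarity of the Lagrangian L = f - lam*g plus the constraint gives a
    # polynomial system; a lex Groebner basis triangularizes it.
    L = f - lam*g
    system = [sp.diff(L, x), sp.diff(L, y), g]
    G = sp.groebner(system, lam, x, y, order='lex')

    candidates = sp.solve(G.exprs, [lam, x, y], dict=True)
    real_pts = [s for s in candidates if all(v.is_real for v in s.values())]
    best = min(real_pts, key=lambda s: float(f.subs(s)))
    print(best, float(f.subs(best)))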

CO99
Benjamin W. Wah and Yi Shang, ``A Comparative Study of IDA*-Style Searches,'' Proc. 6th IEEE Int'l Conference on Tools with Artificial Intelligence (Nov. 1994), pp. 290-296.

In this paper, we study the performance of various IDA*-style searches and investigate methods to improve their performance by predicting in each stage the threshold to use for pruning. We first present three models to approximate the distribution of the number of search nodes by lower bounds: exponential, geometric, and linear, and illustrate these distributions on some well-known combinatorial search problems. We then show the performance of an ideal IDA* algorithm and identify reasons why existing IDA*-style algorithms perform well. In practice, the form of this distribution for a given problem instance may be known from previous experience, but its parameters cannot be determined in advance. Hence, we develop RIDA*, a method that estimates dynamically the parameters of the distribution and predicts the best threshold to use. Finally, we compare the performance of several IDA*-style algorithms --- Korf's IDA*, RIDA*, IDA*-CR and DFS* --- on several application problems, and identify conditions under which each of these algorithms will perform well.
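
For reference, a bare-bones IDA* skeleton is sketched below; its threshold update (take the smallest f-value that exceeded the current threshold) is the classical rule that RIDA*'s statistical prediction replaces. The callbacks successors, h, and is_goal are problem-specific placeholders.

    import math

    def ida_star(start, is_goal, successors, h):
        # successors(s) yields (next_state, step_cost) pairs; h(s) is a lower bound.
        threshold = h(start)
        while True:
            next_threshold = math.inf

            def dfs(state, g, path):
                nonlocal next_threshold
                f = g + h(state)
                if f > threshold:
                    next_threshold = min(next_threshold, f)   # candidate for next stage
                    return None
                if is_goal(state):
                    return path
                for nxt, cost in successors(state):
                    if nxt not in path:                        # simple cycle check
                        found = dfs(nxt, g + cost, path + [nxt])
                        if found is not None:
                            return found
                return None

            result = dfs(start, 0, [start])
            if result is not None:
                return result
            if next_threshold is math.inf:
                return None                                    # exhausted: no solution
            threshold = next_threshold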

C100
Benjamin W. Wah, Pankaj Mehra, and Chin-Chi Teng, ``Comparator Neural Network for Dynamic Prediction,'' Proc. 2nd Int'l Symposium on Neural Networks (Cheng Kung University, Tainan, Taiwan, Dec. 1994), pp. 571-580.

In this paper we present the design of a comparator neural network (CANN) and its associated learning algorithms. This network is useful for comparing two time series in order to predict which one will have a better future value based on a user-specified objective function. We illustrate its application in distributed load balancing and in an automated design system for neural networks. In the first application, we use a CANN as a workload predictor that takes as input a time window of measurable values of CPU occupancy, network traffic, disk traffic, and memory occupancy from two computers, and predicts which one of the two computers will have a shorter turnaround time for executing an incoming job. In the second application, we use a CANN to predict which one of two partially trained neural networks will have a shorter convergence time, given current TSSE traces of these networks. In each application, experimental results show that prediction is improved using the proposed CANNs.

C101
Yao-Jen Chang and Benjamin W. Wah, ``Lagrangian Techniques for Solving a Class of Zero-One Integer Linear Programs,'' Proc. IEEE Int'l Conference on Computer Software and Applications (Aug. 1995), pp. 156-161.

We consider a class of zero-one integer programming feasibility problems (0-1 ILPF problems) in which the coefficients of variables can be integers, and the objective is to find an assignment of binary variables so that all constraints are satisfied. We propose a Lagrangian formulation in the continuous space and develop a gradient search in this space. By using two counteracting forces, one performing gradient search in the primal space (of the original variables) and the other in the dual space (of the Lagrangian variables), we show that our search algorithm does not get trapped in local minima and reaches equilibrium only when a feasible assignment to the original problem is found. We present experimental results comparing our method with backtracking and local search (based on random restarts). Our results show that 0-1 ILPF problems of reasonable sizes can be solved an order of magnitude faster than by existing methods.
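
A simplified version of the primal-dual idea, under the assumption that the constraints have the form A x <= b with x binary, is sketched below: the variables are relaxed to [0, 1], gradient descent is performed on the relaxed variables, and gradient ascent is performed on the multipliers of violated constraints. Step sizes, rounding, and the relaxation itself are illustrative choices, not the paper's exact formulation.

    import numpy as np

    def ilpf_lagrangian(A, b, steps=20000, eta=0.05):
        m, n = A.shape
        x = np.random.rand(n)              # binary variables relaxed to [0, 1]
        lam = np.zeros(m)
        for _ in range(steps):
            rounded = (x > 0.5).astype(float)
            if np.all(A @ rounded <= b):
                return rounded             # feasible 0-1 assignment found
            viol = A @ x - b
            active = (viol > 0).astype(float)
            x = np.clip(x - eta * (A.T @ (lam * active)), 0.0, 1.0)  # primal descent
            lam += eta * np.maximum(viol, 0.0)                       # dual ascent
        return None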

C102
Arthur Ieumwananonthachai and Benjamin W. Wah, ``Statistical Generalization Of Performance-Related Heuristics for Knowledge-Lean Applications,'' Proc. IEEE Int'l Conf. on Tools with Artificial Intelligence (Nov. 1995), pp. 174-181.

In this paper, we present new results on the automated generalization of performance-related heuristics learned for knowledge-lean applications. We study methods to statistically generalize new heuristics learned for some small subsets of a problem space (using methods such as genetics-based learning) to unlearned problem subdomains. Our method uses a new statistical metric called probability of win. By assessing the performance of heuristics in a range-independent and distribution-independent manner, we can compare heuristics across problem subdomains in a consistent manner. To illustrate our approach, we show experimental results on generalizing heuristics learned for sequential circuit testing, VLSI cell placement and routing, and branch-and-bound search. We show that generalization can lead to new and robust heuristics that perform better than the original heuristics across problem instances of different characteristics.

C103
Benjamin W. Wah, A. Ieumwananonthachai, Shu Yao, and Ting Yu, ``Statistical Generalization: Theory and Applications,'' Proc. Int'l Conf. on Computer Design (Oct. 1995), pp. 4-10.

In this paper, we discuss a new approach to generalize heuristic methods (HMs) to new test cases of an application, and conditions under which such generalization is possible. Generalization is difficult when performance values of HMs are characterized by multiple statistical distributions across subsets of test cases of an application. We define a new measure called probability of win and propose three methods to evaluate it: interval analysis, maximum likelihood estimate, and Bayesian analysis. We show experimental results on new HMs found for blind equalization and branch-and-bound search.
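
One simple way to estimate a probability of win, assuming a normal approximation of the two sample means and at least two performance samples per heuristic, is sketched below; it is meant only to make the metric concrete and is not the interval-analysis, maximum-likelihood, or Bayesian estimator of the paper.

    import math, statistics

    def probability_of_win(perf_i, perf_j):
        # Probability that heuristic i has a larger true mean performance than
        # heuristic j, using a normal approximation of the two sample means.
        # Each list must contain at least two samples.
        mi, mj = statistics.mean(perf_i), statistics.mean(perf_j)
        si = statistics.variance(perf_i) / len(perf_i)
        sj = statistics.variance(perf_j) / len(perf_j)
        z = (mi - mj) / math.sqrt(si + sj)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Phi(z)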

C104
Yi Shang and Benjamin W. Wah, ``A Global Optimization Method for Neural Network Training,'' Proc. 1996 IEEE Int'l Conf. on Neural Networks (June 1996), Plenary, Panel and Special Sessions, pp. 7-11.

In this paper, we present a new supervised learning method called Novel (Nonlinear Optimization Via External Lead) for training feed-forward neural networks. In general, such learning can be considered as a nonlinear global optimization problem in which the goal is to minimize the nonlinear error function that spans the space of weights. Novel is a trajectory-based nonlinear optimization method that combines global and local searches to find good local minima. It relies on an external force to pull a search out of a local minimum in its global search and employs local descents to locate local minima in its local search. The result is an efficient search method that identifies good basins without spending a lot of time in them. We have shown improved training and testing results for five neural-network benchmark problems as compared to those of existing minimization and neural-network learning algorithms. For the two-spiral problem, Novel has found a design using only four hidden units and 25 weights. (The best known design requires nine hidden units and 75 weights.) In short, Novel represents a significant advance in the state-of-the-art in supervised learning of neural networks and general optimization of continuous nonlinear functions.

C105
Benjamin W. Wah and Yi Shang, ``A Discrete Lagrangian-Based Global-Search Method for Solving Satisfiability Problems,'' Satisfiability Problem: Theory and Applications, Ding-Zhu Du, Jun Gu, and Panos Pardalos (eds.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science, American Mathematical Society, 1997, pp. 365-392.

Satisfiability is a class of NP-complete problems that model a wide range of real-world applications. These problems are difficult to solve because they have many local minima in their search space, often trapping greedy search methods that utilize some form of descent. Further, gradient changes in the search space are in integers, leading to possibly many directions of the same gradient. In this paper, we propose a new discrete Lagrange-multiplier-based global-search method for solving satisfiability problems. We derive new approaches for applying Lagrangian methods in discrete space, show that equilibrium is reached when a feasible assignment to the original problem is found, and present heuristic algorithms to look for equilibrium points. Instead of restarting from a new starting point when a search reaches a local trap, the Lagrangian variables in our method provide a force to lead the search out of a local minimum and move it in the direction provided by the Lagrangian variables. One of the major advantages of our method is that it has very few algorithmic parameters to be tuned by users, and the search procedure can be made deterministic and the results, reproducible. We demonstrate our method by applying it to solve an extensive set of benchmark problems archived in DIMACS of Rutgers University. Our method generally performs better than the best existing methods and can achieve an order-of-magnitude speedup for some problems. Moreover, our method can solve some new benchmark problems that cannot be solved by other local-search methods.
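
A minimal discrete Lagrangian search for SAT, in the spirit of the method described (flip the variable that most decreases the Lagrangian; when no flip helps, increase the multipliers of unsatisfied clauses), might look like the sketch below. The greedy full rescan and unit multiplier increments are simplifications for clarity, not the paper's tuned procedure.

    import random

    def dlm_sat(clauses, n_vars, max_flips=100000):
        # clauses: list of clauses in DIMACS style, e.g. [1, -2, 3] means
        # (x1 or not x2 or x3).  Variables are indexed 1..n_vars.
        x = [False] + [random.choice([False, True]) for _ in range(n_vars)]
        lam = [1.0] * len(clauses)

        def unsat(c):
            return not any(x[l] if l > 0 else not x[-l] for l in c)

        def lagrangian():
            # unsatisfied clauses weighted by their Lagrange multipliers
            return sum(l for l, c in zip(lam, clauses) if unsat(c))

        for _ in range(max_flips):
            current = lagrangian()
            if current == 0:
                return x[1:]                      # satisfying assignment found
            best_var, best_val = None, current
            for v in range(1, n_vars + 1):        # greedy full rescan, for clarity
                x[v] = not x[v]
                val = lagrangian()
                x[v] = not x[v]
                if val < best_val:
                    best_var, best_val = v, val
            if best_var is not None:
                x[best_var] = not x[best_var]     # descent step in x
            else:
                for i, c in enumerate(clauses):   # local minimum: ascent step in lam
                    if unsat(c):
                        lam[i] += 1.0
        return None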

C106
Chien-Wei Li and Benjamin W. Wah, ``Optimal Bit-Level Processor Arrays for Matrix Multiplication,'' Proc. 11th Int'l Conf. on Systems Engineering (July 1996), pp. 596-601.

Uniform recurrent algorithms belong to a fundamental class of operations used in almost every scientific and engineering computation. An important example of these algorithms is matrix multiplication. In this paper, we present optimal designs of bit-level processor arrays for matrix multiplication, and analyze trade-offs between bit-level and word-level designs in terms of execution time and chip area. We formulate the design as an optimization problem and search for optimal designs using the General Parameter Method developed by Ganapathy and Wah. We show that previous bit-level processor arrays for matrix multiplication are two special cases of our more general designs, and prove that one of the previous designs is optimal in execution time. Finally, we show that the fastest bit-level design is only O(log p) faster than the fastest word-level design, instead of O(p) as claimed in the previous work of Shang and Wah, where p is the number of bits per word.

C107
Jun Gu, Paul W. Purdom, John Franco, and Benjamin W. Wah, ``Algorithms for the Satisfiability (SAT) Problem: A Survey,'' Satisfiability Problem: Theory and Applications, Ding-Zhu Du, Jun Gu, and Panos Pardalos (eds.), DIMACS Series in Discrete Mathematics and Theoretical Computer Science, American Mathematical Society, 1997, pp. 19-152.

The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, database, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the satisfiability problems.

C108
Benjamin W. Wah, Yi Shang, Tao Wang, and Ting Yu, ``Global Optimization Design of QMF Filter Banks,'' Proc. IEEE Midwest Symposium on Circuits and Systems (Aug. 1996), Iowa City, Iowa, vol. 2, pp. 640-643.

In this paper, we present a new global optimization method for designing QMF (quadrature mirror filter) filter banks. We formulate the design problem as a nonlinear constrained optimization problem, using the reconstruction error as the objective, and the stopband ripple, stopband energy, passband ripple, passband energy and transition bandwidth as constraints. This formulation allows us to search for solutions that improve with respect to the objective, and that perform better than or equal to the best existing designs with respect to the constrained measures. We present Novel, a global optimization method we have developed for solving nonlinear continuous constrained optimization problems, and apply it to find improved designs. We also show that relaxing the constraints on transition bandwidth and stopband energy will lead to significant improvements in the other performance measures.

C109
Tao Wang and Benjamin W. Wah, ``Handling Inequality Constraints in Continuous Nonlinear Global Optimization,'' Proc. Society for Design and Process Science Conference (Austin, Texas), December 1996, pp. 267-274.

In this paper, we present a new method to handle inequality constraints and apply it in Novel (Nonlinear Optimization via External Lead), a system we have developed for solving constrained continuous nonlinear optimization problems. In general, in applying Lagrange-multiplier methods to solve these problems, inequality constraints are first converted into equivalent equality constraints. One such conversion method adds a slack variable to each inequality constraint in order to convert it into an equality constraint. The disadvantage of this conversion is that when the search is inside a feasible region, some satisfied constraints may still carry a nonzero weight in the Lagrangian function, leading to possible oscillations and divergence when a local optimum lies on the boundary of a feasible region. We propose a new conversion method called the MaxQ method such that all satisfied constraints in a feasible region always carry zero weight in the Lagrangian function; hence, minimizing the Lagrangian function in a feasible region always leads to local minima of the objective function. We demonstrate that oscillations do not happen in our method. We also propose methods to speed up convergence when a local optimum lies on the boundary of a feasible region. Finally, we show improved experimental results in applying our proposed method in Novel on some existing benchmark problems and compare them to those obtained by applying the method based on slack variables.
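
The contrast between the two conversions can be made concrete with the toy Lagrangian terms below; the function names and the choice q = 2 are illustrative.

    def slack_term(g_val, z, lam):
        # g(x) <= 0 becomes g(x) + z**2 == 0: the term is generally nonzero even
        # when g(x) < 0, so a satisfied constraint can still pull on the search.
        return lam * (g_val + z * z)

    def maxq_term(g_val, lam, q=2):
        # g(x) <= 0 becomes max(0, g(x))**q == 0: identically zero whenever
        # g(x) <= 0, so satisfied constraints carry no weight inside the
        # feasible region.
        return lam * max(0.0, g_val) ** q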

C110
Benjamin W. Wah, Yi Shang, Tao Wang, and Ting Yu, ``QMF Filter Bank Design by a New Global Optimization Method,'' Proc. IEEE Int'l Conf. on Acoustics, Speech and Signal Processing, vol. 3, pp. 2081-2084, April 1997.

In this paper, we study various global optimization methods for designing QMF (quadrature mirror filter) filter banks. We formulate the design problem as a nonlinear constrained optimization problem, using the reconstruction error as the objective, and other performance metrics as constraints. This formulation allows us to search for designs that improve over the best existing designs. We present Novel, a global optimization method for solving nonlinear continuous constrained optimization problems. We show that Novel finds better designs with respect to simulated annealing and genetic algorithms in solving QMF benchmark design problems. We also show that relaxing the constraints on transition bandwidth and stopband energy leads to significant improvements in the other performance measures.

C111
Benjamin W. Wah and Yi Shang, ``Discrete Lagrangian-Based Search for Solving MAX-SAT Problems,'' Proc. 15th Int'l Joint Conf. on Artificial Intelligence (IJCAI), August 1997, pp. 378-383.

Weighted maximum satisfiability problems (MAX-SAT) are difficult to solve due to the large number of local minima in their search space. In this paper we propose a new discrete Lagrangian based search method (DLM) for solving these problems. Instead of restarting from a new point when the search reaches a local minimum, the Lagrange multipliers in DLM provide a force to lead the search out of the local minimum and move it in a direction provided by the multipliers. Since DLM has very few parameters to be tuned, it can be made deterministic and the results, reproducible. We compare DLM with GRASP in solving a large set of test problems, and show that it finds better solutions and is substantially faster. DLM has a solid theoretical foundation that can be used as a systematic approach for solving general discrete optimization problems.

C112
Benjamin W. Wah, Yi Shang, and Zhe Wu, ``Discrete Lagrangian Method for Optimizing the Design of Multiplierless QMF Filter Banks,'' Proc. 1997 IEEE International Conference on Application-Specific Systems, Architectures and Processors (July 1997), pp. 529-538.

In this paper, we present a new discrete Lagrangian optimization method for designing multiplierless QMF (quadrature mirror filter) filter banks. In multiplierless QMF filter banks, filter coefficients are in powers-of-two (PO2) form, in which numbers are represented as sums or differences of powers of two (also called Canonical Signed Digit--CSD--representation), and multiplications can be carried out as additions, subtractions and shifting. We formulate the design problem as a nonlinear discrete constrained optimization problem, using the reconstruction error as the objective, and other performance metrics as constraints. One of the major advantages of this formulation is that it allows us to search for designs that improve over the best existing designs with respect to all performance metrics, rather than finding designs that trade one performance metric for another. We show that our design method can find designs that improve over Johnston's benchmark designs using a maximum of three to six ONE bits in each filter coefficient.
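
A standard recoding of a nonnegative integer coefficient into CSD digits (each in {-1, 0, 1}, with no two adjacent nonzero digits) is sketched below for concreteness; it is generic background, not the paper's design procedure.

    def to_csd(n):
        """Return CSD digits of a nonnegative integer n, least-significant first."""
        digits = []
        while n != 0:
            if n % 2:
                d = 2 - (n % 4)      # +1 or -1, chosen so the next bit becomes 0
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits

    # Example: to_csd(13) == [1, 0, -1, 0, 1], i.e., 13 = 16 - 4 + 1, so a
    # multiplication by 13 needs only three shift-and-add/subtract terms.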

C113
Benjamin W. Wah, Tao Wang, Yi Shang, and Zhe Wu, ``Improving the Performance of Weighted Lagrange-Multiplier Methods for Nonlinear Constrained Optimization,'' Proc. IEEE International Conference on Tools with Artificial Intelligence (November 1997), pp. 224-231.

Nonlinear constrained optimization problems can be solved by a Lagrange-multiplier method in a continuous space or by its extended discrete version in a discrete space. These methods rely on gradient descents in the objective space to find high-quality solutions, and gradient ascents in the Lagrangian space to satisfy the constraints. The balance between descents and ascents depends on the relative weights between the objective and the constraints that indirectly control the convergence speed and solution quality of the method. To improve convergence speed without degrading solution quality, we propose an algorithm to dynamically control these relative weights. Starting from an initial weight, the algorithm automatically adjusts the weights based on the behavior of the search progress. With this strategy, we are able to eliminate divergence, reduce oscillations, and speed up convergence. We show improved convergence behavior of our proposed algorithm on both nonlinear continuous and discrete problems.

C114
Tao Wang and Benjamin W. Wah, ``Efficient and Adaptive Lagrange-Multiplier Methods for Continuous Nonlinear Optimization,'' Proc. ACM Symposium on Applied Computing, February 1998, pp. 361-365.

In this paper, we address three important issues in applying Lagrangian methods to solve optimization problems with inequality constraints. First, we propose a MaxQ method that transforms inequality constraints to equality constraints. It overcomes divergence and oscillations that occur in the slack-variable method. Some strategies to speed up its convergence are also examined. Second, we develop a method to monitor the balance between descents in the original-variable space and ascents in the Lagrange-multiplier space in Lagrangian methods. During the search, we adjust this balance adaptively in order to improve convergence speed. Third, we introduce a nonlinear traveling trace to pull a search trajectory out of a local equilibrium point in a continuous fashion without restarting the search and without losing information already obtained in the local search. This strategy extends existing Lagrangian methods from a local search of equilibrium points to a global search. We implement these three strategies in Novel (Nonlinear Optimization via External Lead) and apply it to find improved solutions for a collection of benchmark problems.

C115
Tao Wang and Benjamin W. Wah, ``Performance Measures and Lagrange Multiplier Methods to Two-Band PR-LP Filter Bank Design,'' Proc. IEEE Int'l Conf. on Acoustics, Speech and Signal Processing, vol. 3, May 1998, pp. 1461-1464.

In this paper, we study performance measures for designing two-band perfect reconstruction (PR) linear phase (LP) filter banks. Based on these performance metrics, we formulate the design problem as a nonlinear constrained optimization problem, in which some metrics, such as stopband energy, have closed forms, while others, such as transition width, do not. Our formulation allows us to search for designs that improve over existing designs. More importantly, given user-specified performance bounds such as a maximal transition width, we are able to design filter banks whenever solutions exist, and trade-offs among different performance metrics can be achieved easily. Finally, extensive experimental results show the feasibility and efficiency of our filter-bank design method.

C116
Benjamin W. Wah and Xiao Su ``An Efficient Multiaccess Protocol for Wireless Networks,'' Proc. International Symposium on Internet Technology, Taipei, Taiwan, April 1998, pp. 173-178.

In this paper, we propose and evaluate an efficient multiaccess protocol for cell-based wireless networks. Our protocol addresses the problems in existing random-access protocols for wireless networks: long-term fairness as well as short-term fairness in accessing a shared channel and the detection of hidden and exposed collisions. Our proposed protocol is a limited contention protocol in which the set of contending mobiles is chosen based on a global common contention window maintained by every mobile station. The contention window is adjusted based on three possible channel states: no transmission, success, and collision. We assume that the channel state at the end of each contention slot is broadcast by a base station in a control channel. We show analytically that the time interval between two successive accesses to the channel by any station is geometrically distributed, and that each station has an equal chance to access the channel in every contention period. This is significantly better than existing random-access protocols based on the binary exponential backoff algorithm, which results in large variances in inter-access delays. Our experimental results also show that the number of contention slots needed to resolve collisions is constant on the average, independent of the number of contending stations.
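
A generic form of the window-adjustment rule, keyed to the three broadcast channel states, is sketched below; the multiplicative constants, window bounds, and the probabilistic contention rule mentioned in the comment are illustrative assumptions rather than the protocol's actual parameters.

    def update_window(window, state, w_min=1, w_max=1024):
        # Every station applies the same update to the common window, so all
        # stations stay synchronized on its value.
        if state == 'collision':
            window = min(w_max, window * 2)     # too many contenders: enlarge
        elif state == 'idle':
            window = max(w_min, window // 2)    # nobody contended: shrink
        # on 'success' the window is left unchanged
        return window

    # One simple (assumed) way to use the window: a station contends in the
    # next slot with probability 1/window, e.g.
    #   contend = (random.random() < 1.0 / window)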

C117
Yi Shang and Benjamin W. Wah ``A New Global-Search Method for Designing Filter Banks,'' Parallel and Distributed Methods for Image Processing II, SPIE Int'l Symposium on Optical Science, Engineering and Instrumentation, SPIE, July 1998, pp. 94-105.

In this paper, we present a new global-search method for designing QMF (quadrature-mirror-filter) filter banks. We formulate the design problem as a nonlinear constrained optimization problem, using the reconstruction error as the objective, and the other performance metrics as constraints. This formulation allows us to search for designs that improve over the best existing designs. Due to the nonlinear nature of the performance metrics, the design problem is a nonlinear constrained optimization problem with many local minima. We propose to solve this design problem using global-search methods based on Lagrangian formulations. After transforming the original constrained optimization problem into an unconstrained form using Lagrange multipliers, we apply a new global-search method to find good solutions. The method consists of a coarse-level global-search phase, a fine-level global-search phase, and a local-search phase, and is suitable for parallel computation due to the minimal dependency between its key components. In our experiments, we show that our method finds better designs than existing global-search methods, including simulated annealing and genetic algorithms.

C118
Tao Wang and Benjamin W. Wah, ``Constrained Optimization of Filter Banks in Subband Image Coding,'' Proc. Workshop on Multimedia Signal Processing, IEEE Signal Processing Society, Dec. 1998, pp. 432-437.

The design of filter banks in subband image coding is critical for achieving high image quality. In this paper, we study the design from both the signal processing domain and the theory of wavelets. We formulate the design of filter banks as a two-stage nonlinear constrained optimization problem, each stage of which is solved by sequential quadratic programming (SQP). Using a wavelet image coding prototype, we show improved quality of the designed filter banks in terms of image-compression ratios and peak signal-to-noise ratios (PSNRs).

C119
Yi Shang and Benjamin W. Wah, ``Improving the Performance of Discrete Lagrange-Multiplier Search for Solving Hard SAT Problem,'' Proc. 10th International Conference on Tools with Artificial Intelligence, IEEE, Nov. 1998, pp. 176-183.

Recently, we have proposed the discrete Lagrange-multiplier method (DLM) to solve satisfiability problems. Instead of restarting from a new starting point when the search reaches a local minimum in the objective space, the Lagrange multipliers of violated constraints in DLM provide a force to lead the search out of the local minimum and move it in a direction provided by the multipliers. In this paper, we present the theoretical foundation of DLM for solving SAT problems and discuss some implementation issues. We study the performance of DLM on a set of hard satisfiability benchmark instances, and show the importance of dynamic scaling of Lagrange multipliers and the flat-move strategy. We show that DLM can perform better than competing local-search methods when its parameters are selected properly.

C120
Benjamin W. Wah and Dong Lin, ``Transformation-based Reconstruction for Audio Transmissions over the Internet,'' Proc. 17th IEEE Symposium on Reliable Distributed Systems, IEEE, Oct. 1998, pp. 211-217.

This paper studies the design of data transformation algorithms for audio data transmitted over the Internet, with the goal of allowing the receiver to reconstruct the original signals with little distortion in the presence of bursty packet losses. It assumes that a single audio stream is interleaved into multiple packets, and a lost sample at the receiver is reconstructed as the interpolation of adjacent samples received. We propose a non-redundant transformation-based reconstruction algorithm that can minimize the reconstruction error for any fixed, interpolation-based reconstruction algorithm. Its basic idea is that the sender transforms the input audio stream optimally, based on the reconstruction method used at the receiver, before sending the data packets. Consequently, the receiver is able to recover much better from packet losses than without any knowledge of what the signals should be. In particular, we study our transformation algorithm based on one popular linear interpolation-based reconstruction algorithm. We found that our scheme can improve the signal-to-noise ratio (SNR) by 1 to 2 dB with very little extra computational effort as compared to the scheme without transformation.
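
The receiver-side model that the sender's transformation is optimized against can be sketched as follows, assuming round-robin interleaving into k packets, a single lost packet, and reconstruction of each lost sample by averaging its two received neighbors; the sender-side optimal transform itself is not reproduced.

    import numpy as np

    def interleave(samples, k):
        # Round-robin interleaving of a 1-D array of samples into k packets.
        return [samples[i::k].copy() for i in range(k)]

    def reconstruct(packets, lost, k, n):
        # Rebuild a stream of length n when exactly one packet (index `lost`)
        # is missing; each lost sample is the average of its two neighbors,
        # which belong to other packets whenever k >= 2.
        out = np.zeros(n)
        for i in range(k):
            if i != lost:
                out[i::k] = packets[i]
        for j in range(lost, n, k):
            left = out[j - 1] if j > 0 else out[j + 1]
            right = out[j + 1] if j + 1 < n else out[j - 1]
            out[j] = 0.5 * (left + right)
        return out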

C121
Benjamin W. Wah and Xiao Su, ``Streaming Video with Transformation-Based Error Concealment and Reconstruction,'' Proc. IEEE Int'l Conf. on Multimedia Computing and Systems, IEEE, June 1999, vol. 1, pp. 238-243.

Real-time video streaming over the Internet requires robust delivery mechanisms with low overhead. Traditional error control schemes are not attractive because they either add redundant information that may worsen network traffic, or rely solely on the inadequate capability of the decoder to do error concealment. As sophisticated concealment techniques cannot be employed in a real-time software playback scheme, we propose in this paper a simple yet efficient transformation-based error concealment algorithm. The algorithm applies a linear transformation to the original video signals, with the objective of minimizing the mean squared error if missing information were restored by simple averaging at the destination. We also describe two strategies to cope with error propagation in temporally differentially coded frames. Experimental results show that our proposed transformation-based reconstruction algorithm performs well in real Internet tests.

C122
Zhe Wu and Benjamin Wah, ``Trap Escaping Strategies in Discrete Lagrangian Methods for Solving Hard Satisfiability and Maximum Satisfiability Problems,'' Proc. National Conf. on Artificial Intelligence, AAAI, July 1999, pp. 673-678.

In this paper, we present efficient trap-escaping strategies in a search based on discrete Lagrange multipliers to solve difficult SAT problems. Although a basic discrete Lagrangian method (DLM) can solve most of the satisfiable DIMACS SAT benchmarks efficiently, a few of the large benchmarks have eluded solutions by any local-search methods today. These difficult benchmarks generally have many traps that attract local-search trajectories. To this end, we identify the existence of traps when any change to a variable will cause the resulting Lagrangian value to increase. Using the hanoi4 and par16-1 benchmarks, we illustrate that some unsatisfied clauses are trapped more often than others. Since it is too difficult to remember explicitly all the traps encountered, we propose to remember these traps implicitly by giving larger increases to the Lagrange multipliers of unsatisfied clauses that are trapped more often. We illustrate the merit of this new update strategy by solving some of the most difficult but satisfiable SAT benchmarks in the DIMACS archive (hanoi4, hanoi4-simple, par16-1 to par16-5, f2000, and par32-1-c to par32-3-c). Finally, we apply the same algorithm to improve on the solutions of some benchmark MAX-SAT problems that we solved before.

C123
Benjamin Wah and Tao Wang, ``Simulated Annealing with Asymptotic Convergence for Nonlinear Constrained Global Optimization,'' Proc. Principles and Practice of Constraint Programming, Springer-Verlag, October 1999, pp. 461-475.

In this paper, we present constrained simulated annealing (CSA), a global minimization algorithm that converges to constrained global minima with probability one, for solving nonlinear discrete non-convex constrained minimization problems. The algorithm is based on the necessary and sufficient condition for constrained local minima in the theory of discrete Lagrange multipliers we developed earlier. The condition states that the set of discrete saddle points is the same as the set of constrained local minima when all constraint functions are non-negative. To find the discrete saddle point with the minimum objective value, we model the search by a finite inhomogeneous Markov chain that carries out (in an annealing fashion) both probabilistic descents of the discrete Lagrangian function in the original-variable space and probabilistic ascents in the Lagrange-multiplier space. We then prove the asymptotic convergence of the algorithm to constrained global minima with probability one. Finally, we extend CSA to solve nonlinear constrained problems with continuous variables and those with mixed (both discrete and continuous) variables. Our results on a set of nonlinear benchmarks are much better than those reported by others. By achieving asymptotic convergence, CSA is one of the major developments in nonlinear constrained global optimization today.
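
A skeleton of the search, with probabilistic descents in the variable space and multiplier increases on violated constraints carried out under a geometric cooling schedule, is sketched below; the neighborhood, acceptance rule, cooling schedule, and problem callbacks are illustrative placeholders, not the exact procedure whose convergence is proved in the paper.

    import math, random

    def csa(x0, f, viol, neighbors, T0=10.0, alpha=0.98, moves_per_T=200, T_min=1e-3):
        # f(x): objective; viol(x): list of non-negative constraint violations;
        # neighbors(x): list of discrete neighbors of x.  All three are
        # problem-specific placeholders.
        x = x0
        lam = [0.0] * len(viol(x0))
        L = lambda x, lam: f(x) + sum(l * v for l, v in zip(lam, viol(x)))
        T = T0
        while T > T_min:
            for _ in range(moves_per_T):
                if random.random() < 0.8:            # probabilistic descent in x
                    y = random.choice(neighbors(x))
                    d = L(y, lam) - L(x, lam)
                    if d <= 0 or random.random() < math.exp(-d / T):
                        x = y
                else:                                 # probabilistic ascent in lam
                    v = viol(x)
                    i = random.randrange(len(lam))
                    if v[i] > 0:                      # only violated constraints
                        lam[i] += random.uniform(0.0, 1.0)
            T *= alpha                                # geometric cooling (illustrative)
        return x, lam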

C124
Benjamin Wah and Zhe Wu, ``The Theory of Discrete Lagrange Multipliers for Nonlinear Discrete Optimization,'' Proc. Principles and Practice of Constraint Programming, Springer-Verlag, October 1999, pp. 28-42.

In this paper we present a Lagrange-multiplier formulation of discrete constrained optimization problems, the associated discrete-space first-order necessary and sufficient conditions for saddle points, and an efficient first-order search procedure that looks for saddle points in discrete space. Our new theory provides a strong mathematical foundation for solving general nonlinear discrete optimization problems. Specifically, we propose a new vector-based definition of descent directions in discrete space and show that the new definition does not obey the rules of calculus in continuous space. Starting from the concept of saddle points and using only vector calculus, we then prove the discrete-space first-order necessary and sufficient conditions for saddle points. Using well-defined transformations on the constraint functions, we further prove that the set of discrete-space saddle points is the same as the set of constrained local minima, leading to the first-order necessary and sufficient conditions for constrained local minima. Based on the first-order conditions, we propose a local-search method to look for saddle points that satisfy the first-order conditions.
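
For readers unfamiliar with the notation, the discrete saddle-point condition that these first-order conditions build on can be written as follows, where L_d is the discrete Lagrangian and N_d(x*) the discrete neighborhood of x*; this rendering is a reader's aid, not a quotation from the paper:

    L_d(x^*, \lambda) \;\le\; L_d(x^*, \lambda^*) \;\le\; L_d(x, \lambda^*)
    \quad \text{for all } x \in \mathcal{N}_d(x^*) \text{ and all } \lambda \ge 0.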

C125
Benjamin Wah and Tao Wang, ``Constrained Simulated Annealing with Applications in Nonlinear Continuous Constrained Global Optimization,'' Proc. 11th IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 1999, pp. 381-388.

This paper improves constrained simulated annealing (CSA), a discrete global minimization algorithm with asymptotic convergence to discrete constrained global minima with probability one. The algorithm is based on the necessary and sufficient conditions for discrete constrained local minima in the theory of discrete Lagrange multipliers. We extend CSA to solve nonlinear continuous constrained optimization problems whose variables take continuous values. We evaluate many heuristics, such as dynamic neighborhoods, gradual resolution of nonlinear equality constraints and reannealing, in order to greatly improve the efficiency of solving continuous problems. We report much better solutions than the best-known solutions in the literature on two sets of continuous optimization benchmarks.

C126
Zhe Wu and Benjamin Wah, ``Solving Hard Satisfiability Problems: A Unified Algorithm Based On Discrete Lagrange Multipliers,'' Proc. 11th IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 1999, pp. 210-217.

In this paper, we present improved strategies in DLM-99-SAT to escape from traps and to select proper parameter sets to use when it is applied to solve some difficult but satisfiable SAT problems. One of the main issues in DLM-99-SAT is that it has a large number of tunable parameters, making it difficult to determine the best parameters to use when given a new instance. To this end, we propose a set of performance metrics and a set of rules using these metrics that determine at run time the best parameters to use for a specific instance. Our experimental results show that these rules are able to select the most suitable parameter set for each instance with very little overhead. Finally, we verify the performance of DLM-99-SAT by solving some benchmarks in SATLIB and some of the most difficult but satisfiable DIMACS SAT problems, including par32-1-c to par32-4-c and hanoi4-simple.

C127
Benjamin Wah and Ming-lun Qian ``Data Sampling Using Bayesian Analysis and its Applications in Simulated Annealing,'' Proc. of Fifth Int'l Conf. on Computer Science and Informatics, vol. 1, Feb. 2000, pp. 643-646.

In this paper, we propose a new probabilistic sampling procedure and its application in simulated annealing (SA). The new procedure uses Bayesian analysis to evaluate samples made already and draws the next sample based on a density function constructed through Bayesian analysis. After integrating our procedure in SA, we apply it to solve a set of optimization benchmarks. Our results show that our proposed procedure, when used in SA, is very effective in generating high-quality samples that are more reliable and robust in leading to global solutions.

C128
Benjamin Wah and Ming-lun Qian ``Constrained Formulations for Neural Network Training and Their Applications to Solve the Two-Spiral Problem,'' Proc. of Fifth Int'l Conf. on Computer Science and Informatics, vol. 1, Feb. 2000, pp. 598-601.

In this paper, we formulate neural-network training as a constrained optimization problem instead of the traditional formulation based on unconstrained optimization. We show that constraints violated during a search provide additional force to help escape from local minima using our newly developed constrained simulated annealing (CSA) algorithm. We demonstrate the merits of our approach by training neural networks to solve the two-spiral problem. To enhance the search, we have developed a strategy to adjust the gain factor of the activation function. We show converged training results for networks with 4, 5, and 6 hidden units, respectively. Our work is the first successful attempt to solve the two-spiral problem with 19 weights.

C129
Benjamin W. Wah, Dong Lin and Xiao Su ``Streaming Real-Time Audio and Video Data with Transformation-Based Error Concealment and Reconstruction,'' Proc. First International Conference on Web Information Systems Engineering, IEEE, June 2000, pp. 2-11.

One fundamental problem with streaming audio and video data over unreliable IP networks is that packets may be dropped or arrive too late for playback. Traditional error control schemes are not attractive because they either add redundant information that may worsen network traffic, or rely solely on the inadequate capability of the decoder to do error concealment. In this paper, we propose a simple yet efficient transformation-based algorithm in order to conceal network losses in streaming real-time audio and video data over the Internet. On the receiver side, we adopt a simple reconstruction algorithm based on interpolation, as sophisticated concealment techniques cannot be employed in software-based real-time playback. On the sender side, we design a linear transformation with the objective of minimizing the mean squared error, assuming that some of the descriptions are lost and that the missing information is reconstructed by simple averaging at the destination. For video streaming, we further integrate the transformations into the discrete cosine transform (DCT) to produce an optimized reconstruction-based DCT. Experimental results show that our proposed algorithm performs well in real Internet tests.

C130
Zhe Wu and Benjamin W. Wah ``An Efficient Global-Search Strategy in Discrete Lagrangian Methods for Solving Hard Satisfiability Problems,'' Proc. National Conf. on Artificial Intelligence, AAAI, July 2000, pp. 310-315.

In this paper, we present an efficient global-search strategy in an algorithm based on the theory of discrete Lagrange multipliers for solving difficult SAT instances. These difficult benchmarks generally have many traps and basins that attract local-search trajectories. In contrast to trap-escaping strategies proposed earlier that only focus on traps, we propose a global-search strategy that penalizes a search for visiting points close to points visited before in the trajectory, where penalties are computed based on the Hamming distances between the current and historical points in the trajectory. The new strategy specializes to the earlier trap-escaping strategies because when a trajectory is inside a trap, its historical information will contain many points in close vicinity to each other. It is, however, more general than trap escaping because it tries to avoid visiting the same region repeatedly even when the trajectory is not inside a trap. By comparing our results to existing results in the area, we conclude that our proposed strategy is both effective and general.
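
A toy version of such a distance-based penalty, added to the discrete Lagrangian being minimized, is sketched below; the radius, weight, and linear penalty form are illustrative choices rather than the paper's exact function.

    def hamming(a, b):
        return sum(ai != bi for ai, bi in zip(a, b))

    def distance_penalty(x, history, radius=4, weight=1.0):
        # Penalize assignments that lie within `radius` Hamming distance of any
        # recently visited assignment; the penalty is added to the discrete
        # Lagrangian being minimized.
        return weight * sum(max(0, radius - hamming(x, h)) for h in history)

    # `history` can be a bounded window of recent assignments, e.g. a
    # collections.deque(maxlen=50) that is appended to after every move.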

C131
Xiao Su and Benjamin W. Wah ``Streaming Video with Optimized Reconstruction-Based DCT,'' Proc. IEEE Int'l Conf. on Multimedia and Expo, July-Aug. 2000, pp. 536-539.

One fundamental problem with streaming video data over unreliable IP networks is that packets may be dropped or arrive too late for real-time playback. Traditional error-control schemes are not attractive because they either add redundant information that may worsen network traffic, or rely solely on decoders with inadequate error concealment. This paper presents a joint sender-receiver approach in designing transforms for multiple-description coding in order to conceal network losses in streaming real-time video over the Internet. On the receiver side, we adopt a simple interpolation-based reconstruction, as sophisticated concealment techniques cannot be employed in software-based real-time playback. On the sender side, we design an optimized reconstruction-based discrete cosine transform (ORB-DCT) with the objective of minimizing the mean squared error, assuming that some of the descriptions are lost and that the missing information is reconstructed by simple averaging at the destination. Experimental results show that our proposed ORB-DCT performs better than the original DCT in real Internet tests. Future research includes finding a perceptual-based quantization matrix based on the extended basis images derived for reconstruction, and incorporating the effects of quantization and inverse quantization in the design.

C132
Benjamin W. Wah and Ming-lun Qian, ``Time-Series Predictions Using Constrained Formulations for Neural-Network Training and Cross Validation,'' Proc. Int'l Conf. on Intelligent Information Processing, IFIP World Computer Congress, Kluwer Academic Press, Aug. 2000, pp. 220-226.

In this paper, we formulate the training of artificial neural networks for time-series prediction as a constrained optimization problem, instead of the traditional formulation as an unconstrained optimization. A constrained formulation is better because violated constraints provide additional guidance during a search beyond that given by the sum-of-squared errors of a traditional unconstrained formulation. Using a constrained formulation, we propose to add constraints on validation errors in order to monitor the prediction quality during training of an ANN. We then solve the constrained problem using our newly developed constrained simulated annealing algorithm (CSA). Experimental results on the sunspot and laser time series show that constrained formulations on training and cross-validation, when solved by CSA, lead to better prediction quality and fewer weights than previous designs.

C133
Benjamin W. Wah and Yi-Xin Chen ``Optimal Anytime Constrained Simulated Annealing for Constrained Global Optimization,'' Proc. Principles and Practice of Constraint Programming, Springer-Verlag, Sept. 2000, pp. 425-439.

In this paper we propose an optimal anytime version of constrained simulated annealing (CSA) for solving constrained nonlinear programming problems (NLPs). One of the goals of the algorithm is to generate feasible solutions of certain prescribed quality using an average time of the same order of magnitude as that spent by the original CSA with an optimal cooling schedule in generating a solution of similar quality. Here, an optimal cooling schedule is one that leads to the shortest average total number of probes when the original CSA with the optimal schedule is run multiple times until it finds a solution. Our second goal is to design an anytime version of CSA that generates gradually improving feasible solutions as more time is spent, eventually finding a constrained global minimum (CGM). In our study, we have observed a monotonically non-decreasing function relating the success probability of obtaining a solution and the average completion time of CSA, and an exponential function relating the objective target that CSA is looking for and the average completion time. Based on these observations, we have designed CSA_{AT-ID}, the anytime CSA with iterative deepening that schedules multiple runs of CSA using a set of increasing cooling schedules and a set of improving objective targets. We then prove the optimality of our schedules and demonstrate experimentally the results on four continuous constrained NLPs. CSA_{AT-ID} can be generalized to solving discrete, continuous, and mixed-integer NLPs, since CSA is applicable to solve problems in these three classes. Our approach can also be generalized to other stochastic search algorithms, such as genetic algorithms, and be used to determine the optimal time for each run of such algorithms.

C134
Benjamin W. Wah and Yi-Xin Chen, ``Constrained Genetic Algorithms and their Applications in Nonlinear Constrained Optimization,'' Proc. 12th IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 2000, pp. 286-293.

This paper presents a problem-independent framework that unifies various mechanisms for solving discrete constrained nonlinear programming (NLP) problems whose functions are not necessarily differentiable and continuous. The framework is based on the first-order necessary and sufficient conditions in the theory of discrete constrained optimization using Lagrange multipliers. It implements the search for discrete-neighborhood saddle points by performing descents in the original-variable subspace and ascents in the Lagrange-multiplier subspace. Our study on the various mechanisms shows that CSAGA, a combined constrained simulated annealing and genetic algorithm, performs well. Finally, we apply iterative deepening to determine the optimal number of generations in CSAGA.

C135
Benjamin W. Wah, Xiao Su and Dong Lin, ``A Survey of Error-Concealment Schemes for Real-Time Audio and Video Transmissions over the Internet,'' Proc. IEEE Int'l Symposium on Multimedia Software Engineering, Dec. 2000, pp. 17-24.

Real-time audio and video data streamed over unreliable IP networks, such as the Internet, may encounter losses due to dropped packets or late arrivals. This paper reviews error-concealment schemes developed for streaming real-time audio and video data over the Internet. Based on their interactions with (video or audio) source coders, we classify existing techniques into source coder-independent schemes that treat underlying source coders as black boxes, and source coder-dependent schemes that exploit coder-specific characteristics to perform reconstruction. Last, we identify possible future research directions.

C136
Benjamin W. Wah and Yi-Xin Chen ``Hybrid Constrained Simulated Annealing and Genetic Algorithms for Nonlinear Constrained Optimization,'' Proc. IEEE Congress on Evolutionary Computation, May 2001, pp. 925-932.

This paper presents a framework that unifies various search mechanisms for solving constrained nonlinear programming (NLP) problems. These problems are characterized by functions that are not necessarily differentiable and continuous. Our proposed framework is based on the first-order necessary and sufficient condition for constrained local minimization in discrete space that shows the equivalence between discrete-neighborhood saddle points and constrained local minima. To look for discrete-neighborhood saddle points, we formulate a discrete constrained NLP in an augmented Lagrangian function and study various mechanisms for performing descents of the augmented function in the original-variable subspace and ascents in the Lagrange-multiplier subspace. Our results show that CSAGA, a combined constrained simulated annealing (SA) and genetic algorithm (GA), performs well. Finally, we apply iterative deepening to determine the optimal number of generations in CSAGA and show that performance is robust with respect to changes in population size.

C137
Benjamin W. Wah and Ming-lun Qian ``Violation-Guided Learning for Constrained Formulations in Neural-Network Time-Series Predictions,'' Proc. Int'l Joint Conf. on Artificial Intelligence, Aug. 2001, pp. 771-776.

Time-series predictions by artificial neural networks (ANNs) are traditionally formulated as unconstrained optimization problems. As an unconstrained formulation provides little guidance on search directions when a search gets stuck in a poor local minimum, we have proposed recently to use a constrained formulation in order to use constraint violations to provide additional guidance. In this paper, we formulate ANN learning with cross-validations for time-series predictions as a non-differentiable nonlinear constrained optimization problem. Based on our theory of Lagrange multipliers for discrete constrained optimization, we propose an efficient learning algorithm, called violation-guided back-propagation (VGBP), that computes an approximate gradient using back-propagation (BP), that introduces annealing to avoid blind acceptance of trial points, and that applies a relax-and-tighten (R&T) strategy to achieve faster convergence. Extensive experimental results on well-known benchmarks, when compared to previous work, show one to two orders-of-magnitude improvement in prediction quality, while using fewer weights.

C138
Benjamin W. Wah and Xiao Su, ``Coding and Transmission of Subband Coded Images in the Internet (Keynote Address),'' Proc. SPIE Multispectral Image Processing and Pattern Recognition, Image Compression and Encryption Technologies, vol. 4551, Oct. 2001, pp. 1-10.

Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of single-description coded (SDC) images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.

C139
Benjamin W. Wah and Ming-lun Qian, ``Constrained Formulations and Algorithms for Stock-Price Predictions Using Recurrent FIR Neural Networks,'' Proc. National Conference on Artificial Intelligence, Aug. 2002, pp. 211-216.

In this paper, we develop new constrained artificial-neural-network (ANN) formulations and learning algorithms to predict future stock prices, a difficult time-series prediction problem. Specifically, we characterize stock prices as a non-stationary noisy time series, identify its predictable low-frequency components, develop strategies to predict missing low-frequency information in the lag period of a filtered time series, model the prediction problem by a recurrent FIR ANN, formulate the training problem of the ANN as a constrained optimization problem, develop new constraints to incorporate the objectives in cross validation, solve the learning problem using algorithms based on the theory of Lagrange multipliers for nonlinear discrete constrained optimization, and illustrate our prediction results on three stock time series. There are two main contributions of this paper. First, we present a new approach to predict missing low-pass data in the lag period when low-pass filtering is applied on a time series. Such predictions allow learning to be carried out more accurately. Second, we propose new constraints on cross validation that can improve significantly the accuracy of learning in a constrained formulation. Our experimental results demonstrate good prediction accuracy in a 10-day horizon.

C140
Benjamin W. Wah and Xiao Su, ``Loss concealments of subband coded images for real-time transmissions in the Internet,'' Proc. Int'l Conf. on Multimedia and Expo, IEEE, Aug. 2002, pp. 449-452.

Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Because TCP employs congestion control and retransmissions, and because UDP and TCP packets are treated differently by routers, TCP incurs much longer delays than UDP in delivering an image; however, packet losses in UDP may lead to poor decoding quality if the image is single-description coded (SDC) and the losses cannot be concealed. In this paper, we study the use of UDP to deliver multi-description coded (MDC) reconstruction-based subband-transformed (ST) images and the reconstruction of missing information at a receiver based on information received. To facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in MDC. We then carefully evaluate the delay-quality trade-offs between the TCP delivery of SDC images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.

C141
Dong Lin and Benjamin W. Wah, ``LSP-Based Multiple-Description Coding for Real-Time Low Bit-Rate Voice Transmissions,'' Proc. IEEE Int'l Conf. on Multimedia and Expo, Aug. 2002, pp. 597-600.

A fundamental issue in real-time interactive voice transmissions over unreliable IP networks is the loss or late arrival of packets for playback. Such losses cannot be recovered by retransmissions due to tight time constraints in interactive applications. This problem is especially serious in transmitting low bit-rate coded speech when pervasive dependencies are introduced in a bit stream, leading to the loss of subsequent dependent frames when a single packet is lost or arrives late. In this paper, we propose a novel LSP-based multiple-description coding method that adapts its number of descriptions to network loss conditions in order to conceal packet losses in transmitting low-bit-rate coded speech over lossy packet networks. Based on high correlations observed in linear predictor parameters, in the form of Line Spectral Pairs (LSPs), of adjacent frames, we generate multiple descriptions in a sender by interleaving LSPs, and reconstruct lost LSPs in a receiver by linear interpolations. Without increasing the transmission bandwidth, our scheme represents a trade-off between the quality of received packets and the ability to reconstruct lost packets. Our experimental results on FS CELP show good performance.
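
A minimal sketch of the interleaving-and-interpolation idea follows, assuming 10-dimensional LSP vectors, two descriptions carrying even- and odd-numbered frames, and reconstruction of a lost frame's LSP vector by linearly interpolating its surviving neighbors; the CELP codec integration and the adaptive choice of the number of descriptions are not shown.

```python
import numpy as np

def make_descriptions(lsp_frames):
    """Interleave per-frame LSP vectors into two descriptions:
    description 0 carries even-numbered frames, description 1 the odd ones."""
    return lsp_frames[0::2], lsp_frames[1::2]

def reconstruct(received_even, received_odd, n_frames):
    """Rebuild the LSP sequence; a frame missing from its description is
    approximated by linearly interpolating its surviving neighbors."""
    frames = [None] * n_frames
    for k, v in enumerate(received_even):
        if v is not None:
            frames[2 * k] = v
    for k, v in enumerate(received_odd):
        if v is not None:
            frames[2 * k + 1] = v
    for i in range(n_frames):
        if frames[i] is None:
            prev = next((frames[j] for j in range(i - 1, -1, -1) if frames[j] is not None), None)
            nxt = next((frames[j] for j in range(i + 1, n_frames) if frames[j] is not None), None)
            if prev is not None and nxt is not None:
                frames[i] = 0.5 * (prev + nxt)          # linear interpolation
            else:
                frames[i] = prev if prev is not None else nxt
    return frames

# Example: 6 frames of 10 LSP coefficients; the odd description loses frame 3.
lsps = [np.sort(np.random.rand(10)) for _ in range(6)]
even, odd = make_descriptions(lsps)
odd = [odd[0], None, odd[2]]                            # frame 3 is lost
rebuilt = reconstruct(even, odd, 6)
print(np.abs(rebuilt[3] - lsps[3]).max())               # interpolation error for the lost frame
```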

C142
Yixin Chen, Guozhu Dong, Jiawei Han, Benjamin W. Wah, and Jianyong Wang, ``Multi-Dimensional Regression Analysis of Time-Series Data Streams,'' Proc. Very Large Data Bases, Dec. 2002, pp. 323-334.

Real-time production systems and other dynamic environments often generate a tremendous (potentially infinite) amount of stream data; the volume of data is too large to be stored on disks or scanned multiple times. Can we perform on-line, multi-dimensional analysis and data mining of such data to alert people about dramatic changes of situations and to initiate timely, high-quality responses? This is a challenging task.

In this paper, we investigate methods for online, multi-dimensional regression analysis of time-series stream data, with the following contributions: (1) our analysis shows that only a small number of compressed regression measures instead of the complete stream of data need to be registered for multi-dimensional linear regression analysis, (2) to facilitate online stream data analysis, a partially materialized data cube model, with regression as measure, and a tilt time frame as its time dimension, is proposed to minimize the amount of data to be retained in memory or stored on disks, and (3) an exception-guided drilling approach is developed for on-line, multi-dimensional exception-based regression analysis. Based on this design, algorithms are proposed for efficient analysis of time-series data streams. Our performance study compares the proposed algorithms and identifies the most memory- and time-efficient one for stream data analysis.
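
The claim that a few compressed regression measures suffice can be illustrated for simple linear regression of a measure against time: the synopsis (n, Σx, Σy, Σxx, Σxy) determines the regression line, and synopses of disjoint chunks of the stream can be merged, so raw tuples need not be retained. The sketch below is a generic illustration under these assumptions, not the paper's exact set of regression measures or cube design.

```python
from dataclasses import dataclass

@dataclass
class RegressionSynopsis:
    """Compressed measure sufficient for simple linear regression y = a + b*x."""
    n: int = 0
    sx: float = 0.0
    sy: float = 0.0
    sxx: float = 0.0
    sxy: float = 0.0

    def add(self, x, y):
        # Absorb one stream tuple into the synopsis.
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def merge(self, other):
        # Combine synopses of two disjoint chunks of the stream.
        return RegressionSynopsis(self.n + other.n, self.sx + other.sx,
                                  self.sy + other.sy, self.sxx + other.sxx,
                                  self.sxy + other.sxy)

    def fit(self):
        # Return (intercept, slope) of the least-squares regression line.
        denom = self.n * self.sxx - self.sx ** 2
        b = (self.n * self.sxy - self.sx * self.sy) / denom
        a = (self.sy - b * self.sx) / self.n
        return a, b

# Two chunks of a stream give the same fit as processing all tuples at once.
left, right = RegressionSynopsis(), RegressionSynopsis()
for t in range(0, 50):
    left.add(t, 3.0 + 0.5 * t)
for t in range(50, 100):
    right.add(t, 3.0 + 0.5 * t)
print(left.merge(right).fit())       # -> approximately (3.0, 0.5)
```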

C143
Yixin Chen, Guozhu Dong, Jiawei Han, Benjamin W. Wah, and Jianyong Wang, ``OLAPing Stream Data: Is it Feasible?'' Proc. ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, 2002, pp. 53-58.

Real-time surveillance systems and other dynamic environments often generate a tremendous (potentially infinite) volume of stream data: the volume is too large to be scanned multiple times. However, much of such data resides at a rather low level of abstraction, whereas most analysts are interested in dynamic changes (such as trends and outliers) at relatively high levels of abstraction. To discover such high-level characteristics, one may need to perform online multi-level analysis of stream data, similar to OLAP (online analytical processing) of relational and data warehouse data.

With limited storage space and the demand for fast response, is it realistic to promote online, multi-dimensional analysis and mining of stream data to alert people about dramatic changes of situations at multiple levels of abstraction?

In this paper, we present an architecture, called stream-cube, which, based on our analysis, is feasible for successful online multi-dimensional, multi-level analysis of stream data. By successful, we mean that the system will provide analytical power and flexibility, derive timely and quality responses, and consume limited memory space and other resources.

The general design of the stream-cube architecture is described as follows. First, a tilt time frame model is taken as the default model for the time dimension. Such a model reduces the amount of data to be retained in memory or stored on disks but still achieves flexibility and analysis power. Second, a small number of critical layers are maintained for flexible analysis. Consider that the stream data resides at the primitive layer. It is desirable to identify two critical higher layers in applications: the minimal interest layer and the observation layer. These two layers can be used as the basis for stream cube construction and for multi-level online analysis. Third, instead of computing a complete data cube based on the notion of critical layers, one may compute only the layers along a popular path and leave others for query-driven, online computation. Our analysis shows that this architecture provides a good time-space trade-off: it decreases substantially the demand for memory usage and reduces the overall response time. Other possible designs for OLAPing stream data are also discussed in the paper.
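
The tilt time frame idea can be sketched as follows: recent data are kept at fine granularity and older data are rolled up into coarser units, so the number of retained cells stays bounded. The minute/hour/day granularities, the COUNT-like measure, and the assumption that timestamps arrive in increasing order are illustrative choices, not the paper's design.

```python
from collections import OrderedDict

class TiltedTimeFrame:
    """Keep per-minute cells for the last hour, per-hour cells for the last day,
    and per-day cells beyond that, so memory stays bounded as the stream grows.
    Assumes minute timestamps arrive in non-decreasing order."""
    def __init__(self):
        self.minutes = OrderedDict()   # minute timestamp -> aggregated measure
        self.hours = OrderedDict()     # hour index -> aggregated measure
        self.days = OrderedDict()      # day index  -> aggregated measure

    def insert(self, minute_ts, value):
        self.minutes[minute_ts] = self.minutes.get(minute_ts, 0) + value
        self._roll_up(minute_ts)

    def _roll_up(self, now):
        # Minutes older than 60 are merged into their hour ...
        while self.minutes and next(iter(self.minutes)) <= now - 60:
            ts, v = self.minutes.popitem(last=False)
            hour = ts // 60
            self.hours[hour] = self.hours.get(hour, 0) + v
        # ... and hours older than 24 are merged into their day.
        while self.hours and next(iter(self.hours)) <= now // 60 - 24:
            h, v = self.hours.popitem(last=False)
            self.days[h // 24] = self.days.get(h // 24, 0) + v

ttf = TiltedTimeFrame()
for minute in range(3 * 24 * 60):      # three days of one measurement per minute
    ttf.insert(minute, 1)
print(len(ttf.minutes), len(ttf.hours), len(ttf.days))   # bounded cell counts
```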

C144
Yixin Chen and Benjamin W. Wah, ``Calculus of Variations in Discrete Space for Constrained Nonlinear Dynamic Optimization,'' Proc. 14th IEEE International Conference on Tools with Artificial Intelligence, Nov. 2002, pp. 67-74.

In this paper, we propose new dominance relations that can speed up significantly the solution process of nonlinear constrained dynamic optimization problems in discrete time and space. We first show that path dominance in dynamic programming cannot be applied when there are general constraints that span across multiple stages, and that node dominance, in the form of Euler-Lagrange conditions developed in optimal control theory in continuous space, cannot be extended to that in discrete space. This paper is the first to propose efficient dominance relations, in the form of local saddle-point conditions in each stage of a problem, for pruning states that will not lead to locally optimal paths. By utilizing these dominance relations, we develop efficient search algorithms whose complexity, though still exponential, has a much smaller base as compared to that without using the relations. Finally, we demonstrate the performance of our algorithms on some spacecraft planning and scheduling benchmarks and show significant improvements in CPU time and solution quality as compared to those obtained by the existing ASPEN planner.

C145
Yixin Chen and Benjamin W. Wah, ``Automated Planning and Scheduling using Calculus of Variations in Discrete Space,'' Proc. Int'l Conf. on Automated Planning and Scheduling, June 2003, pp. 2-11.

In this paper, we propose new dominance relations that can speed up significantly the solution process of planning problems formulated as nonlinear constrained dynamic optimization in discrete time and space. We first show that path dominance in dynamic programming cannot be applied when there are general constraints that span across multiple stages, and that node dominance, in the form of Euler-Lagrange conditions developed in optimal control theory in continuous space, cannot be extended to that in discrete space. This paper is the first to propose efficient node-dominance relations, in the form of local saddle-point conditions in each stage of a discrete-space planning problem, for pruning states that will not lead to locally optimal paths. By utilizing these dominance relations, we present efficient search algorithms whose complexity, though still exponential, has a much smaller base as compared to that without using the relations. Finally, we demonstrate the performance of our approach by integrating it in the ASPEN planner and show significant improvements in CPU time and solution quality on some spacecraft scheduling and planning benchmarks.

C146
Hang Yu and Benjamin W. Wah, ``Frequency-Based Reconstruction of Multi-Description Coded JPEG2000 Images,'' Proc. IEEE Int'l Conf. on Computer Networks and Mobile Computing, Oct. 2003, pp. 92-99.

Multiple description coding (MDC), together with reconstruction, is a popular technique for concealing packet losses in a best-effort network like the Internet. In our previous study, we have developed a sample-based MDC scheme for subband images and an optimal reconstruction-based subband transform (ORB-ST) at senders that takes into account the reconstruction process at receivers. Due to the partitioning of a description into segments before coding the segments by the ORB-ST coder, our previous scheme has severe degradation in quality because, among other reasons, the coder cannot exploit redundancies across multiple segments in a description, especially at low bit rates. To address this issue, we propose in this paper to divide a description into subbands in the frequency domain, instead of segmenting a description in the sample domain. Based on the observation that the corresponding subbands across two descriptions after inverse transformation are correlated, we present a scheme that reconstructs a lost subband using the corresponding subband received in the other description. We show that our scheme is more effective than reconstruction in the full band; that it has similar quality and lower delays in low-loss scenarios and slightly degraded quality in high-loss scenarios, when compared to MDC, unsegmented, and JPEG2000-coded images sent by TCP; and that it has much better quality and similar delays when compared to MDC, segmented, and ORB-ST-coded images sent by UDP.

C147
Dong Xin, Jiawei Han, Xiaolei Li, and Benjamin W. Wah, ``Star-Cubing: Computing Iceberg Cubes by Top-Down and Bottom-Up Integration,'' Proc. Very Large Data Bases, Sept. 2003, pp. 112-123.

Data cube computation is one of the most essential but expensive operations in data warehousing. Previous studies have developed two major approaches, top-down vs. bottom-up. The former, represented by the Multi-Way Array Cube (called MultiWay) algorithm, aggregates simultaneously on multiple dimensions; however, it cannot take advantage of Apriori pruning when computing iceberg cubes (cubes that contain only aggregate cells whose measure value satisfies a threshold, called iceberg condition). The latter, represented by two algorithms: BUC and H-Cubing, computes the iceberg cube bottom-up and facilitates Apriori pruning. BUC explores fast sorting and partitioning techniques; whereas H-Cubing explores a data structure, H-Tree, for shared computation. However, none of them fully explores multi-dimensional simultaneous aggregation. In this paper, we present a new method, Star-Cubing, that integrates the strengths of the previous three algorithms and performs aggregations on multiple dimensions simultaneously. It utilizes a star-tree structure, extends the simultaneous aggregation methods, and enables the pruning of the group-by's that do not satisfy the iceberg condition. Our performance study shows that Star-Cubing is highly efficient and outperforms all the previous methods in almost all kinds of data distributions.
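
For reference, the bottom-up Apriori pruning mentioned above can be sketched for the COUNT measure as in the BUC-style baseline below; the partitioning order and the tiny data set are illustrative, and the sketch does not reproduce Star-Cubing's star-tree or its simultaneous aggregation.

```python
def buc_iceberg(rows, num_dims, min_count, start_dim=0, cell=(), out=None):
    """Enumerate all group-by cells whose COUNT meets min_count.
    A partition that fails the threshold is pruned together with all of its
    more specific descendant cells (Apriori pruning), because instantiating
    additional dimensions can only shrink the count."""
    if out is None:
        out = []
    if len(rows) < min_count:
        return out                         # prune this cell and its descendants
    out.append((cell, len(rows)))          # the cell satisfies the iceberg condition
    for d in range(start_dim, num_dims):
        parts = {}
        for row in rows:                   # partition the rows on dimension d
            parts.setdefault(row[d], []).append(row)
        for value, part in parts.items():
            buc_iceberg(part, num_dims, min_count, d + 1, cell + ((d, value),), out)
    return out

# Tiny example: tuples over dimensions (store, product, month).
data = [("s1", "p1", "jan"), ("s1", "p1", "feb"), ("s1", "p2", "jan"),
        ("s2", "p1", "jan"), ("s2", "p2", "feb"), ("s2", "p2", "mar")]
for cell, count in buc_iceberg(data, 3, min_count=2):
    print(cell, count)
```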

C148
Benjamin W. Wah, Yixin Chen, and Robert Morris, ``Discrete-Space Lagrangian Optimization for Multi-Objective Temporal Planning,'' Fourth Int'l Workshop on Distributed Constraint Reasoning, Aug. 2003.

In this paper, we study multi-objective temporal planning problems in discrete time and space formulated as single-objective dynamic optimization problems with a minimax objective function. We propose efficient node-dominance relations for pruning states that will not lead to locally optimal plans. Based on the theory of Lagrange multipliers in discrete space, we present the necessary and sufficient conditions for locally optimal plans, partition the Lagrangian function into distributed Lagrangian functions, one for each stage, and show the distributed necessary conditions in the form of local saddle-point conditions in each stage. By utilizing these dominance relations, we present efficient search algorithms whose complexity, though still exponential, has a much smaller base as compared to that without using the relations, and that can converge asymptotically to Pareto optimal plans. Finally, we demonstrate the performance of our approach by integrating it in the ASPEN planner and show significant improvements in CPU time and solution quality on some spacecraft scheduling and planning benchmarks.

C149
Benjamin W. Wah and Yixin Chen, ``Partitioning of Temporal Planning Problems in Mixed Space using the Theory of Extended Saddle Points,'' Proc. IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 2003, pp. 266-273.

We study the partitioning of temporal planning problems formulated as mixed-integer nonlinear programming problems, develop methods to reduce the search space of partitioned subproblems, and propose algorithms for resolving unsatisfied global constraints. The algorithms are based on the necessary and sufficient extended saddle-point condition for constrained local minimization developed in this paper. When compared with the MIPS planner in solving some PDDL2.1 planning problems, our distributed implementation of MIPS shows significant improvements in time and quality.

C150
Yixin Chen, Chih-Wei Hsu, and Benjamin W. Wah, ``SGPlan: Subgoal Partitioning and Resolution in Planning,'' Proc. Fourth Int'l Planning Competition, Int'l Conf. on Automated Planning and Scheduling, Whistler, Canada, June 5, 2004.

We have developed SGPlan, a planner that competes in the Fourth International Planning Competition. SGPlan partitions a large planning problem into subproblems, each with its own subgoal, and resolves inconsistent solutions of subgoals using our extended saddle-point condition. Subgoal partitioning is effective because each partitioned subproblem involves a substantially smaller search space than that of the original problem. We have developed methods for the detection of reasonable orders among subgoals, an intermediate goal-agenda analysis to hierarchically decompose each subproblem, a search-space-reduction algorithm to eliminate irrelevant actions in subproblems, and a strategy to call the best planner to solve each bottom-level subproblem. Currently, SGPlan supports PDDL2.1 and derived predicates, and algorithms for supporting time initiated facts and ADL are under development.

C151
Benjamin W. Wah and Dong Xin, ``Optimization of Bounds in Temporal Flexible Planning with Dynamic Controllability,'' Proc. IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 2004, pp. 40-48.

A temporal flexible planning problem can be formulated as a simple temporal network with uncertainty (STNU), whose links are classified as contingent and requirement links. The problem of constraint satisfaction for STNU has been characterized as controllability, where dynamic controllability is the most interesting and useful controllability property. In this paper, we study the assignment of bounds allowed on the requirement links in order for the resulting STNU to be dynamically controllable and the total cost over the allowed ranges of the requirement links to be minimized. Since the problem with a linear cost function is NP-hard, we formulate the dynamic controllability of an STNU with a general cost function as constraints in a nonlinear optimization problem. Our approach is flexible because it can incorporate additional constraints, such as resource constraints, in the formulation. Finally, we present methods to reduce the number of constraints in order to make the problem tractable.

C152
Yixin Chen, Chih-Wei Hsu, and B. W. Wah, ``System Demonstration: Subgoal Partitioning and Resolution in SGPlan,'' System Demonstration Session, Int'l Conf. on Automated Planning and Scheduling, AAAI, June 2005, pp. 32-35.

In this demo, we illustrate the idea and operations of SGPlan, a PDDL2.2 planner that won the first prize in the suboptimal temporal metric track and a second prize in the suboptimal propositional track in the Fourth International Planning Competition (IPC4), 2004. SGPlan is the only planner that won in two tracks of the competition. Since SGPlan is a suboptimal planner, it did not participate in the optimal track. The demo uses AIRPORT, a PDDL2.2 domain from IPC4, as a running example throughout the presentation.

C153
Batu Sat and Benjamin W. Wah, ``Speech-Adaptive Layered G.729 Coder for Loss Concealments of Real-Time Voice over IP,'' Proc. IEEE Int'l Conf. on Multimedia and Expo, July 2005.

In this paper, we propose a speech-adaptive layered-coding (LC) scheme for the loss concealments of real-time CELP-coded speech transmitted over IP networks. Based on the ITU G.729 CS-ACELP codec operating at 8 Kbps, we design a loss-robust speech-adaptive codec at the same bit rate. Our scheme employs LC with redundant packetization in order to conceal losses and adapt to dynamic loss conditions characterized by the loss rate and the degree of burstiness, while maintaining an acceptable end-to-end delay. By protecting only the most important excitation parameters of each frame according to its speech type, our approach enables more efficient use of the bit budget. Our scheme delivers good-quality speech with a level of protection similar to full replication under medium loss rates, provides speech quality similar to the standard G.729 under very low loss rates, and outperforms both for low-to-medium loss rates.

C154
Benjamin W. Wah and Yixin Chen, ``Solving Large-Scale Nonlinear Programming Problems by Constraint Partitioning,'' Principles and Practice of Constraint Programming, Springer-Verlag, Oct. 2005, pp. 697-711.

In this paper, we present a constraint-partitioning approach for finding local optimal solutions of large-scale mixed-integer nonlinear programming problems (MINLPs). Based on our observation that MINLPs in many engineering applications have highly structured constraints, we propose to partition these MINLPs by their constraints into subproblems, solve each subproblem by an existing solver, and resolve those violated global constraints across the subproblems using our theory of extended saddle points. Constraint partitioning makes many MINLPs that cannot be solved by existing solvers solvable because it leads to easier subproblems that are significant relaxations of the original problem. The success of our approach relies on our ability to resolve violated global constraints efficiently, without requiring exhaustive enumerations of variable values in these constraints. We have developed an algorithm for automatically partitioning a large MINLP in order to minimize the number of global constraints, an iterative method for determining the optimal number of partitions in order to minimize the search time, and an efficient strategy for resolving violated global constraints. Our experimental results demonstrate significant improvements over the best existing solvers in terms of solution time and quality in solving a collection of mixed-integer and continuous nonlinear constrained optimization benchmarks.
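
The distinction between local and global constraints in a partition can be made concrete with the small sketch below: given each constraint's set of variables and an assignment of variables to blocks, a constraint is local if all of its variables fall in one block and global otherwise. The stage-wise blocking of variables is an illustrative assumption, and the sketch is not the paper's automatic partitioning algorithm.

```python
def classify_constraints(constraint_vars, var_block):
    """Split constraints into per-block local sets and a global set.
    constraint_vars: list of sets of variable indices, one set per constraint.
    var_block:       dict mapping each variable index to its block id."""
    local, global_cons = {}, []
    for c, variables in enumerate(constraint_vars):
        blocks = {var_block[v] for v in variables}
        if len(blocks) == 1:
            local.setdefault(blocks.pop(), []).append(c)
        else:
            global_cons.append(c)   # spans blocks: must be resolved across subproblems
    return local, global_cons

# Toy structure: 9 variables in 3 stage-wise blocks of 3;
# most constraints touch one stage, a few couple adjacent stages.
var_block = {v: v // 3 for v in range(9)}
constraints = [{0, 1}, {1, 2}, {3, 4, 5}, {4, 5}, {6, 8}, {7, 8},
               {2, 3},            # couples blocks 0 and 1 -> global
               {5, 6}]            # couples blocks 1 and 2 -> global
local, global_cons = classify_constraints(constraints, var_block)
print("local per block:", local)
print("global constraints:", global_cons)   # -> [6, 7]
```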

C155
Dong Lin, Benjamin W. Wah, and Hang Yu, ``Perceptual Weighting in LSP-Based Multi-Description Coding for Real-Time Low-Bit-Rate Voice over IP,'' IEEE Workshop on Multimedia Signal Processing, Oct. 2005.

This paper focuses on improving an LSP-based multi-description coding (MDC) scheme for concealing losses when real-time CELP-coded speech data is sent over IP networks. In that scheme, LSP vectors are interleaved because they are highly correlated and can be reconstructed by interpolations when packets are delayed or lost. However, excitation vectors are random in nature and must be replicated in order to conceal losses. To avoid increasing the bit rate of the encoded speech, the excitation vectors are generated from longer subframes. This leads to lower quality of the decoded speech because noise from outside formant regions is over-emphasized, and more excitation information is extracted there. To improve quality, we propose in this paper to modify the perceptual-weighting filter (PWF) in the coder in order to adjust the allocation of noise inside and outside formant regions. We further study a method that selects the PWF in such a way that maintains high quality of the LSP-based MDC across different voice streams and loss scenarios. Experimental results on FS-1016 CELP (4800 bps), ITU G.723.1 ACELP (5300 bps), and ITU G.723.1 MP-MLQ (6300 bps) demonstrate noticeable improvements in decoding quality.

C156
Batu Sat and Benjamin W. Wah, ``Speech- and Network-Adaptive Layered G.729 Coder for Loss Concealments of Real-Time Voice over IP,'' IEEE Workshop on Multimedia Signal Processing, Oct. 2005.

In this paper, we propose a layered CELP speech coding (LC) scheme that adapts dynamically to the characteristics of the speech encoded and the network loss conditions in real time transmissions of voice over IP. Based on the ITU G.729 CS-ACELP codec operating at 8 Kbps, we design a variable bit-rate codec that is robust to losses and delays in IP networks. To cope with bursty losses while maintaining an acceptable end-to-end delay, our scheme employs LC with redundant piggybacking of perceptually important parameters in the base layer, with a degree of redundancy adjusted according to feedbacks from receivers. Under various delay constraints, we study trade-offs between the additional bit rate required for redundant piggybacking and the protection of perceptually important parameters. Experimental results show that our scheme works well and has quality comparable to full replication.

C157
Chih-Wei Hsu, Benjamin W. Wah, and Yixin Chen, ``Subgoal Ordering and Granularity Control for Incremental Planning,'' Proc. IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 2005, pp. 507-514.

In this paper, we study strategies in incremental planning for ordering and grouping subproblems partitioned by the subgoals of a planning problem when each subproblem is solved by a basic planner. To generate a rich set of partial orders for ordering subproblems, we propose a new ordering algorithm based on a relaxed plan built from the initial state to the goal state. The new algorithm considers both the initial and the goal states and can effectively order subgoals in such a way that greatly reduces the number of invalidations during incremental planning. We have also considered trade-offs between the granularity of the subgoal sets and the complexity of solving the overall planning problem. We show an optimal region of grain size that minimizes the total complexity of incremental planning. We propose an efficient strategy to dynamically adjust the grain size in partitioning in order to operate in this optimal region. We further evaluate a redundant-execution scheme that uses two different subgoal orders in order to improve the quality of the plans generated without greatly sacrificing run-time efficiency. Experimental results on using three basic planners (Metric-FF, YAHSP, and LPG-TD-speed) show that our strategies are general for improving the time and quality of each of these planners across various benchmarks.

C158
Batu Sat and Benjamin W. Wah, ``Analysis and Evaluation of the Skype and Google-Talk VoIP Systems,'' Proc. IEEE Int'l Conf. on Multimedia and Expo, July 2006.

In this paper, we study Skype and Google Talk, two widely used VoIP systems, and compare their perceptual speech quality with that of our proposed system using UDP packet traces collected in the PlanetLab. Based on methods for speech coding, packetization, jitter control, estimation and feedback of network conditions, and loss concealments, our results show that Skype has noticeable quality degradations because it only uses a maximum of two-way redundancy for loss concealment, does not handle out-of-order arrivals, and applies a fixed jitter control of 60 ms relative to the expected arrival time. Its slow loss-adaptation time of more than one minute to change from one-way to two-way redundancy makes it susceptible to quality degradation under fast changing loss conditions. In contrast, Google Talk does not employ any loss adaptation and performs similarly to Skype under low- to medium-loss scenarios. By addressing the shortcomings of Skype and Google Talk, we demonstrate improvements in speech quality in our proposed prototype.

C159
Chih-Wei Hsu, Benjamin W. Wah, Ruoyun Huang, and Yixin Chen, ``Handling Soft Constraints and Goals Preferences in SGPlan,'' Proc. ICAPS Workshop on Preferences and Soft Constraints in Planning, June 2006.

In this paper, we present the partition-and-resolve strategy in SGPlan (hereafter called SGPlan5) for fully supporting all language features in PDDL3.0. Based on the architecture of SGPlan that supported PDDL2.2 (hereafter called SGPlan4), SGPlan5 partitions a large planning problem into subproblems, each with its own subgoal, and resolves those inconsistent solutions using our extended saddle-point condition. Subgoal partitioning is effective for solving large planning problems because each partitioned subproblem involves a substantially smaller search space than that of the original problem. In SGPlan5, we generalize subgoal partitioning so that the goal state of a subproblem is no longer one goal fact as in SGPlan4, but can be any fact with loosely coupled constraints with other subproblems. We have further developed methods for representing a planning problem in a multi-valued form and for carrying out partitioning in the transformed space. The multi-valued representation leads to more efficient heuristics for resolving trajectory and temporal constraints and goal preferences.

C160
Chih-Wei Hsu, Benjamin W. Wah, Ruoyun Huang, and Yixin Chen, ``New Features in SGPlan for Handling Preferences and Constraints in PDDL3.0,'' Proc. Fifth Int'l Planning Competition, Int'l Conf. on Automated Planning and Scheduling, June 2006.

In this paper, we describe our enhancements incorporated in SGPlan (hereafter called SGPlan5) for supporting the new features of the PDDL3.0 language used in the Fifth International Planning Competition (IPC5). Based on the architecture of SGPlan that competed in the Fourth IPC (hereafter called SGPlan4), SGPlan5 partitions a large planning problem into subproblems, each with its own subgoal, and resolves those inconsistent solutions using our extended saddle-point condition. Subgoal partitioning is effective for solving large planning problems because each partitioned subproblem involves a substantially smaller search space than that of the original problem. In SGPlan5, we generalize subgoal partitioning so that the goal state of a subproblem is no longer one goal fact as in SGPlan4, but can be any fact with loosely coupled constraints with other subproblems. We have further developed methods for representing a planning problem in a multi-valued form in order to accommodate the new features in PDDL3.0, and for carrying out partitioning in the transformed space. The multi-valued representation leads to more effective heuristics for resolving goal preferences and trajectory and temporal constraints.

C161
Benjamin W. Wah, Yixin Chen, and Andrew Wan, ``Constrained Global Optimization by Constraint Partitioning and Simulated Annealing,'' Proc. IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 2006, pp. 265-272.

In this paper, we present constraint-partitioned simulated annealing (CPSA), an algorithm that extends our previous constrained simulated annealing (CSA) for constrained optimization. The algorithm is based on the theory of extended saddle points (ESPs). By decomposing the ESP condition into multiple necessary conditions, CPSA partitions a problem by its constraints into subproblems, solves each independently using CSA, and resolves those violated global constraints across the subproblems. Because each subproblem is exponentially simpler and the number of global constraints is very small, the complexity of solving the original problem is significantly reduced. We state without proof the asymptotic convergence of CPSA with probability one to a constrained global minimum in discrete space. Last, we evaluate CPSA on some continuous constrained benchmarks.

C162
Chih-Wei Hsu, Benjamin W. Wah, Ruoyun Huang, and Yixin Chen, ``Constraint Partitioning for Solving Planning Problems with Trajectory Constraints and Goal Preferences'', Proc. Int'l Joint Conf. on Artificial Intelligence, Jan. 2007, pp. 1924-1929.

The PDDL3 specifications include soft goals and trajectory constraints for distinguishing high-quality plans among the many feasible plans in a solution space. To reduce the complexity of solving a large PDDL3 planning problem, constraint partitioning can be used to decompose its constraints into subproblems of much lower complexity. However, constraint locality due to soft goals and trajectory constraints cannot be effectively exploited by existing subgoal-partitioning techniques developed for solving PDDL2.2 problems. In this paper, we present an improved partition-and-resolve strategy for supporting the new features in PDDL3. We evaluate techniques for resolving violated global constraints, optimizing goal preferences, and achieving subgoals in a multi-valued representation. Empirical results on the 5th International Planning Competition (IPC5) benchmarks show that our approach is effective and significantly outperforms other competing planners.

C163
Batu Sat and Benjamin W. Wah, ``Playout Scheduling and Loss-Concealments in VoIP for Optimizing Conversational Voice Communication Quality,'' Proc. ACM Multimedia, Sept. 2007, pp. 137-146.

In this paper, we present new adaptive playout scheduling (POS) and loss-concealment (LC) schemes for delivering high and consistent conversational voice communication quality (CVCQ) perceived by users in real-time VoIP systems. We first characterize the delay and loss conditions of an IP network and a human conversation in a VoIP system. We then identify the attributes that affect the human perception of CVCQ, which include listening-only speech quality (LOSQ), conversational interactivity (CI), and conversational efficiency (CE). We investigate their trade-offs with respect to system-controllable mouth-to-ear delays (MEDs) and the amount of redundant piggybacking. Finally, we evaluate our adaptive POS and redundancy-based LC schemes by packet traces collected in the PlanetLab.

C164
Batu Sat and Benjamin W. Wah, ``Evaluation of Conversational Voice Communication Quality of the Skype, Google-Talk, Windows Live, and Yahoo Messenger VoIP Systems,'' IEEE Workshop on Multimedia Signal Processing, Oct. 2007 (accepted to appear).

In this paper, we evaluate the conversational voice communication quality (CVCQ) of VoIP systems, both from the user and the system perspectives. We first identify the metrics for CVCQ, which include listening-only speech quality (LOSQ), conversational interactivity (CI), and conversational efficiency (CE). These depend on the mouth-to-ear delays (MEDs) between the two clients. Based on packet traces collected in the PlanetLab and on the dynamics of human interactive speech, we study four popular VoIP client systems: Skype (v2.5), Google-Talk (Beta), Windows Live Messenger (v8.0), and Yahoo Messenger with Voice (v8.0), under various network and conversational conditions.

C165
Benjamin W. Wah and Soomin Lee, ``Hypergraph Partitioning for Exploiting Localities in Nonlinear Constrained Optimization,'' Proc. IEEE Int'l Conf. on Tools with Artificial Intelligence, Oct. 2007 (accepted to appear).

In this paper, we present a new hypergraph partitioning algorithm that jointly optimizes the number of hyperedge cuts and the number of shared vertices in nonlinear constrained optimization problems. By exploiting the localities of constraints with respect to their variables, we propose to partition the constraints into subproblems. We use a relaxed global search to solve the subproblems and resolve those violated global constraints across the subproblems by updating the corresponding penalties. As resolving violated global constraints is computationally expensive, we propose to reduce the number of global constraints by increasing the number of shared variables. This trade-off is advantageous because the number of global constraints in many benchmarks can be significantly reduced by having a small number of variables shared across the subproblems. Partitioning in this context can be achieved by partitioning the corresponding hypergraph. We improve hMETIS to jointly optimize the number of hyperedge cuts and the number of shared vertices. Our experimental results demonstrate improved solution quality with similar computational overhead on some VLSI cell placement benchmarks.
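
One way to read the joint objective is sketched below: variables are vertices that may be replicated (shared) across parts, constraints are hyperedges, a hyperedge counts as cut (a global constraint) unless some single part holds all of its vertices, and a vertex counts as shared if it sits in more than one part. This reading, and the toy instance, are assumptions for illustration; the modified hMETIS heuristic itself is not reproduced.

```python
def partition_cost(hyperedges, vertex_parts):
    """Evaluate a candidate partition in which a vertex (variable) may be
    replicated in several parts (subproblems).
    hyperedges:   list of sets of vertex ids, one per constraint.
    vertex_parts: dict mapping each vertex id to the set of parts holding it."""
    cut_edges = 0
    for edge in hyperedges:
        common = set.intersection(*(vertex_parts[v] for v in edge))
        if not common:
            cut_edges += 1             # no single subproblem sees the whole constraint
    shared_vertices = sum(1 for parts in vertex_parts.values() if len(parts) > 1)
    return cut_edges, shared_vertices

edges = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5, 6}, {6, 7}, {1, 7}]
# Two parts; replicating variable 3 in both parts keeps every constraint through
# it local, at the cost of one shared vertex.
vertex_parts = {0: {0}, 1: {0}, 2: {0}, 3: {0, 1}, 4: {1}, 5: {1}, 6: {1}, 7: {1}}
print(partition_cost(edges, vertex_parts))   # -> (1, 1): one global constraint, one shared variable
```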

C166
Batu Sat, Zixia Huang, and Benjamin W. Wah, ``The Design of a Multi-Party VoIP Conferencing System over the Internet,'' Proc. IEEE Int'l Symposium on Multimedia, Dec. 2007, pp. 3-10.

In this paper, we present the design of a VoIP conferencing system that enables the voice communication of multiple users in the Internet. After studying the conversational dynamics in multi-party conferencing, we identify user-observable metrics that affect the perception of conversational quality and their trade-offs. Based on the dynamics and the behavior on delays, jitters, and losses of Internet traces collected in the PlanetLab, we design the transmission topology and schemes for loss concealments and play-out scheduling. Last, we compare the performance of our system and Skype (Version 3.5.0.214) using repeatable experiments that simulate human participants and network conditions in a multi-party conferencing scenario.

C167
Zixia Huang, Batu Sat, and Benjamin W. Wah, ``Automated Learning of Playout Scheduling Algorithms For Improving Perceptual Conversational Quality in Multi-Party VoIP,'' Proc. IEEE Int'l Conf. on Multimedia and Expo, July 2008, pp. 493-496.

In this paper, we propose four methods for equalizing the silence periods experienced by users in a multi-party VoIP conversation in order to improve their perceived conversational quality. To mitigate the unbalanced silence periods caused by delay disparities in Internet connections, the playout scheduler at the receiver of each client equalizes the silence periods experienced. Our limited subjective tests show that we can improve the perceptual quality when the network connections are lossy and have large delay disparities. Because it is impossible to conduct subjective tests under all possible conditions, we have developed a classifier that learns to select the best equalization algorithm using learning examples derived from subjective tests under limited network and conversational conditions. Our experimental results show that our classifier can consistently pick the best algorithm with the highest subjective conversational quality under unseen conditions, and that our system has better perceptual quality when compared to that of Skype (Version 3.6.0.244).

C168
Chih-Wei Hsu and Benjamin W. Wah, ``The SGPlan Planning System in IPC-6,'' Proc. Sixth Int'l Planning Competition, Int'l Conf. on Automated Planning and Scheduling, Sept. 2008 (accepted to appear).

In this paper, we describe the planning system SGPlan6 participating in the satisficing tracks of the deterministic part of the Sixth Planning Competition (IPC6). SGPlan6 is designed to solve both temporal and non-temporal planning problems specified in PDDL3, which may have soft goals, derived predicates, or ADL features. It inherited the parallel decomposition framework from SGPlan5, which consists of the partitioning procedure, the resolution algorithm, and the subproblem solver. New enhancements are added in SGPlan6 to address poor localities and to further exploit multi-valued domain structure. We identify domain-specific features that lead to the use of techniques for overcoming different scenarios of problem decomposition.

C169
Soomin Lee and Benjamin W. Wah, ``Finding Good Starting Points For Solving Nonlinear Constrained Optimization Problems By Parallel Decomposition,'' Proc. 7th Mexican Int'l Conf. on Artificial Intelligence, A. Gelbukh and E.F. Morales (ed.), vol. LNAI 5317, Springer-Verlag Berlin Heidelberg, Oct. 2008, pp. 65-76.
In this paper, we develop heuristics for finding good starting points when solving large-scale nonlinear constrained optimization problems (COPs) formulated as nonlinear programming (NLP) and mixed-integer NLP (MINLP). By exploiting the localities of constraints, we first partition each problem by parallel decomposition into subproblems that are related by complicating constraints and complicating variables. We develop heuristics for finding good starting points that are critical for resolving the complicating constraints and variables. In our experimental evaluations of 255 benchmarks, our approach can solve 89.4% of the problems, whereas the best existing solvers can only solve 42.8%.

C170
Soomin Lee and Benjamin W. Wah, ``Finding Good Starting Points For Solving Structured and Unstructured Nonlinear Constrained Optimization Problems,'' Proc. 20th IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 2008, vol. 1, pp. 469-476.
In this paper, we develop heuristics for finding good starting points when solving large-scale nonlinear constrained optimization problems (COPs). We focus on nonlinear programming (NLP) and mixed-integer NLP (MINLP) problems with nonlinear non-convex objective and constraint functions. By exploiting the highly structured constraints in these problems, we first solve one or more simplified versions of the original COP, before generalizing the solutions found by interpolation or extrapolation to a good starting point. In our experimental evaluations of 190 NLP (resp., 52 MINLP) benchmark problems, our approach can solve 97.9% (resp., 71.2%) of the problems using significantly fewer iterations from our proposed starting points, as compared to 85.3% (resp., 46.2%) of the problems solvable by the best existing solvers from their default starting points.

C171
Batu Sat and Benjamin W. Wah, ``Statistical Testing Of Off-line Comparative Subjective Evaluations For Optimizing Perceptual Conversational Quality In VoIP,'' Proc. IEEE Int'l Symposium on Multimedia, Dec. 2008, pp. 424-431.
In this paper, we study the scheduling of off-line subjective tests for evaluating alternative parameter values of control schemes in real-time multimedia applications. These applications are characterized by multiple counteracting objective quality metrics (such as delay and signal quality) that can be affected by the control schemes. However, the trade-offs among these metrics with respect to the subjective preferences of users are not defined. As a result, it is difficult to use the proper control value that leads to the best subjective quality without carrying out subjective tests. Since subjective tests are expensive to conduct and the number of possible control values and run-time conditions is prohibitively large, it is important that a minimum number of such tests be conducted, and that the results learned can be generalized to unseen conditions with statistical confidence. To this end, we study in this paper optimal algorithms for scheduling a sequence of subjective tests, while leaving the generalization of limited off-line subjective tests to guide the operations of the control schemes at run time to a future paper. Using an example application in the design of the playout scheduling (POS) algorithm for a two-party VoIP system, we study the accuracy and efficiency of conducting subjective tests simultaneously.

C172
Wee Hong Yeo, Batu Sat and Benjamin W. Wah, ``New Piggybacking Algorithm In VoIP Using Enhanced G.722.2 Codec With Larger Frames,'' Proc. IEEE Int'l Workshop on Multimedia Signal Processing (MMSP'09), Oct. 2009, pp. 1-4.
In this paper, we present the design of a new piggybacking algorithm for VoIP implemented using the G.722.2 codec. In piggybacking, multiple speech frames that include those transmitted in the past are encapsulated in a single packet. Because redundant copies of each frame are transmitted to the receiver, the receiver can recover those lost frames when one or more packets are lost or arrive late in their transmission. In this paper, we have enhanced the G.722.2 codec by removing further redundancies in the multiple frames encapsulated in piggybacking and by having only one set of LP coefficients for all the subframes encapsulated. We create multiple versions of the codec, each using a different frame size. Our new codec can encode the multiple frames with little degradation in PESQ, while having substantial bit savings. Its performance is evaluated against the original method of piggybacking over random losses, as well as that using packet traces collected in the PlanetLab.
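
Independent of the codec enhancements described above, the basic piggybacking mechanism can be sketched as follows: each packet carries the current frame plus the most recent previous frames, so a receiver can recover a lost frame from a redundant copy in a later packet that does arrive. The frame payloads and the redundancy depth below are placeholders.

```python
def packetize(frames, redundancy=2):
    """Packet i carries frame i plus up to `redundancy` earlier frames."""
    packets = []
    for i in range(len(frames)):
        lo = max(0, i - redundancy)
        packets.append({j: frames[j] for j in range(lo, i + 1)})
    return packets

def depacketize(received_packets, n_frames):
    """Rebuild the frame sequence from whichever packets arrived in time;
    a frame is recovered if any arriving packet carries a copy of it."""
    frames = [None] * n_frames
    for packet in received_packets:
        for idx, payload in packet.items():
            frames[idx] = payload
    return frames

frames = [f"frame-{i}".encode() for i in range(8)]
packets = packetize(frames, redundancy=2)
received = [p for i, p in enumerate(packets) if i not in (3, 4)]   # packets 3 and 4 lost
rebuilt = depacketize(received, len(frames))
print([i for i, f in enumerate(rebuilt) if f is None])   # -> []: both frames recovered
```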

C173
Jiangbo Lu and Benjamin W. Wah, ``Scheduling Transmissions of Real-Time Video Coded Frames in Video Conferencing Applications over the Internet,'' Workshop on Networking Issues in Multimedia Entertainment (NIME), Proc. IEEE Int'l Conf. on Multimedia and Expo (ICME), July 19-23, 2010, pp. 1695-1700.

This paper presents a novel mechanism for scheduling packet transmissions of real-time video conferencing over the public Internet. By scheduling transmissions of video packets at different priorities in the presence of packet losses and delays, we aim to deliver videos with perceptually pleasing and consistent quality at receivers. Fundamental to this new mechanism is a realization of uneven packet transmission rates (UPTR) in the Internet, which allows an instantaneous packet transmission rate to fluctuate around an average rate without significantly affecting the average loss rate and delay in transmissions. By exploiting this generic UPTR idea, we propose two application scenarios that provide attractive design alternatives for robust real-time video delivery over the lossy Internet. Experimental results clearly show the advantages of our proposed methods over previous solutions that are unaware of UPTR.

C174
E. Wah, Y. Mei, and B. W. Wah, ``Portfolio Optimization through Data Conditioning and Aggregation,'' Proc. 23rd IEEE Int'l Conf. on Tools with Artificial Intelligence, Nov. 2011, pp. 253-260.
In this paper, we present a novel portfolio optimization method that aims to generalize the delta changes of future returns, based on historical delta changes of returns learned in a past window of time. Our method addresses two issues in portfolio optimization. First, we observe that daily returns of stock prices are very noisy and often non-stationary and dependent. In addition, they do not follow certain well-defined distribution functions, such as the Gaussian distribution. To address this issue, we first aggregate the return values over a multi-day period into an average return in order to reduce the noise of daily returns. We further propose a pre-selection scheme based on stationarity, normality and independence tests in order to select a subset of stocks that have promising statistical properties. Second, we have found that optimizing the average risk in a past window does not typically generalize to future returns with minimal risks. To this end, we develop a portfolio optimization method that uses the delta changes of aggregated returns in a past window to optimize the delta changes of future expected returns. Our experimental studies show that data conditioning and aggregation in our proposed method are an effective means of improving the generalizability while simultaneously minimizing the risk of the portfolio.
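
The aggregation step can be sketched as follows: daily returns are averaged over non-overlapping multi-day windows before any statistical tests or optimization are applied, which damps the noise of daily returns. The window length and the synthetic price series below are illustrative assumptions, not data or parameters from the paper.

```python
import numpy as np

def daily_returns(prices):
    """Simple daily returns from a price series."""
    prices = np.asarray(prices, dtype=float)
    return prices[1:] / prices[:-1] - 1.0

def aggregate_returns(returns, period=5):
    """Average returns over non-overlapping `period`-day windows to reduce
    the noise of daily returns before further analysis."""
    n = (len(returns) // period) * period
    return returns[:n].reshape(-1, period).mean(axis=1)

# Synthetic noisy price series for illustration only.
rng = np.random.default_rng(0)
prices = 100.0 * np.cumprod(1.0 + 0.0005 + 0.02 * rng.standard_normal(250))
r = daily_returns(prices)
r5 = aggregate_returns(r, period=5)
print(r.std(), r5.std())      # the aggregated series is markedly less noisy
```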

C175
Jingxi Xu and Benjamin W. Wah, ``Delay-Aware Loss Concealment Strategies for Real-Time Video Conferencing,'' Proc. IEEE Int'l Symposium on Multimedia (ISM), Dec. 2011, pp. 27-34.
One-way audiovisual quality and mouth-to-ear delay (MED) are two important quality metrics in the design of real-time video-conferencing systems, and their trade-offs have significant impact on the user-perceived quality. In this paper, we address one aspect of this larger problem by developing efficient loss-concealment schemes that optimize the one-way quality under given MED and network conditions. Our experimental results show that our approach can attain significant improvements over the LARDo reference scheme that does not consider MED in its optimization.

C176
Jingxi Xu and Benjamin W. Wah, ``Exploiting Just-Noticeable Difference of Delays for Improving Quality of Experience in Video Conferencing,'' Proc. ACM Multimedia Systems Conference (MMSys), Feb. 2013, pp. 238-248.
This paper proposes a novel approach for improving the quality of experience (QoE) of real-time video conferencing systems. In these systems, QoE is affected by signal quality as well as interactivity, both depending on the packet loss rate, delay jitters, and mouth-to-ear delay (MED) that measures the sender-receiver delay on audio signals (and will be the same as that of video signals when video and audio are synchronized). We notice in the current Internet that increasing MED as well as reducing packet rate can help reduce the delay-aware loss rate in congested connections. Between the two methods, the former plays a more important role and applies well to a variety of network conditions for improving audiovisual signal quality, although overly increasing the MED will degrade interactivity. Based on a psychophysical concept called just-noticeable difference (JND), we find the extent to which MED can be increased, without humans perceiving the difference from the original conversation. The approach can be applied to improve existing video conferencing systems. Starting from the operating point of an existing system, we increase its MED to within JND in order to have more room for smoothing network delay spikes as well as recovering lost packets, without incurring noticeable degradation in interactivity. We demonstrate the idea on Skype and Windows Live Messenger by designing a traffic interceptor to extend their buffering time and to perform packet scheduling/recovery. Our experimental results show significant improvements in QoE, with much better signal quality while maintaining similar interactivity.
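
The underlying trade-off can be sketched as follows: given measured one-way packet delays and the current playout deadline, pick the smallest extra buffering, capped by a JND budget, that brings the fraction of late packets under a target. The JND budget, the target loss rate, and the synthetic delay trace below are placeholders, not values measured in the paper.

```python
import numpy as np

def delay_aware_loss(delays_ms, deadline_ms):
    """Fraction of packets that miss the playout deadline (late = lost)."""
    return float(np.mean(np.asarray(delays_ms) > deadline_ms))

def extra_buffering_within_jnd(delays_ms, deadline_ms, jnd_budget_ms,
                               target_loss=0.05, step_ms=5):
    """Smallest additional buffering (0..jnd_budget_ms) that meets the target
    late-loss rate, so the interactivity change stays below the JND budget."""
    for extra in range(0, jnd_budget_ms + 1, step_ms):
        if delay_aware_loss(delays_ms, deadline_ms + extra) <= target_loss:
            return extra
    return jnd_budget_ms          # budget exhausted; accept the residual loss

# Placeholder one-way delay trace with a heavy tail (in ms).
rng = np.random.default_rng(1)
delays = rng.gamma(shape=4.0, scale=20.0, size=2000) + 40.0
deadline = 150.0
extra = extra_buffering_within_jnd(delays, deadline, jnd_budget_ms=60)
print(extra, delay_aware_loss(delays, deadline), delay_aware_loss(delays, deadline + extra))
```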

C177
Jingxi Xu and Benjamin W. Wah, ``Concealing Network Delays in Delay-Sensitive Online Interactive Games based on Just-Noticeable Differences,'' Proc. IEEE Int'l Conf. on Multimedia and Expo, July 2013.
Online delay-sensitive games with fast interactive actions, like fighting games (FTG) and sports games, require the synchronization of coupled multi-player actions. With network impairments, action commands from other players can be delayed or lost, leading to compromised real-time perception of these games. In contrast to slower strategy games, online fighting and sports games studied in this paper require real-time judgment and instant feedback. Traditional methods for optimizing the delay effects of these games are focused on quantifying round-trip delays, without examining the perceptual effects of players. In this paper, we develop a new criterion using just-noticeable differences (JND) for optimizing the duration of actions and responses. Our approach aims to reduce the probability of players perceiving the delay effects, when compared to a reference game with zero network delay. Using statistics collected in offline subjective tests, the timing of actions is modified at run time. Experimental results show significant reduction of players' awareness of network delays using our approach when compared to existing delay-concealment schemes.

C178
Feihu Zhang and Benjamin W. Wah, ``Supplementary Meta-Learning: Towards a Dynamic Model for Deep Neural Networks,'' Proc. Int'l Conf. on Computer Vision, Oct. 2017, pp. 4354-4363.
Data diversity in terms of types, styles, as well as radiometric, exposure and texture conditions widely exists in training and test data of vision applications. However, learning in traditional neural networks (NNs) only tries to find a model with fixed parameters that optimize the average behavior over all inputs, without using data-specific properties. In this paper, we develop a meta-level NN (MLNN) model that learns meta-knowledge on data-specific properties of images during learning and that dynamically adapts its weights during application according to the properties of the images input. MLNN consists of two parts: the dynamic supplementary NN (SNN) that learns meta-information on each type of inputs, and the fixed base-level NN (BLNN) that incorporates the meta-information from SNN into its weights at run time to realize the generalization for each type of inputs. We verify our approach using over ten network architectures under various application scenarios and loss functions. In low-level vision applications on image super-resolution and denoising, MLNN has 0.1-0.3 dB improvements on PSNR, whereas for high-level image classification, MLNN has accuracy improvement of 0.4-0.6% for Cifar10 and 1.2-2.1% for ImageNet when compared to convolutional NNs (CNNs). Improvements are more pronounced as the scale or diversity of data is increased.

C179
Feihu Zhang, Xiaojuan Qi, Ruigang Yang, Victor Prisacariu, Benjamin W. Wah, and Philip Torr, ``Domain-Invariant Stereo Matching Networks,'' Proc. European Conf. on Computer Vision (ECCV 2020), Lecture Notes in Computer Science, LNCS Vol 12347, Springer-Verlag, Aug. 2020, pp. 420-439.
State-of-the-art stereo matching networks have difficulties in generalizing to new unseen environments due to significant domain differences, such as color, illumination, contrast, and texture. In this paper, we aim at designing a domain-invariant stereo matching network (DSMNet) that generalizes well to unseen scenes. To achieve this goal, we propose i) a novel ``domain normalization'' approach that regularizes the distribution of learned representations to allow them to be invariant to domain differences, and ii) an end-to-end trainable structure-preserving graph-based filter for extracting robust structural and geometric representations that can further enhance domain-invariant generalizations. When trained on synthetic data and generalized to real test sets, our model performs significantly better than all state-of-the-art models. It even outperforms some deep neural network models (e.g. MC-CNN and DispNet) fine-tuned with test-domain data. The code is available at https://github.com/feihuzhang/DSMNet.

C180
Feihu Zhang, Jin Fang, Benjamin W. Wah, and Philip Torr, ``Deep FusionNet for Point Cloud Semantic Segmentation,'' Proc. European Conf. on Computer Vision (ECCV 2020), Lecture Notes in Computer Science, LNCS Vol 12347, Springer-Verlag, Aug. 2020.
Many point cloud segmentation methods rely on transferring irregular points into a voxel-based regular representation. Although voxel-based convolutions are useful for feature aggregation, they produce ambiguous or wrong predictions if a voxel contains points from different classes. Other approaches (such as PointNets and point-wise convolutions) can take irregular points for feature learning. But their high memory and computational costs (such as for neighborhood search and ball-querying) limit their ability and accuracy for large-scale point cloud processing. To address these issues, we propose a deep fusion network architecture (FusionNet) with a unique voxel-based ``mini-PointNet'' point cloud representation and a new feature aggregation module (fusion module) for large-scale 3D semantic segmentation. Our FusionNet can learn more accurate point-wise predictions when compared to voxel-based convolutional networks. It can realize more effective feature aggregations with lower memory and computational complexity for large-scale point cloud segmentation when compared to the popular point-wise convolutions. Our experimental results show that FusionNet can take more than one million points on one GPU for training to achieve state-of-the-art accuracy on the large-scale SemanticKITTI benchmark. The code will be available at https://github.com/feihuzhang/LiDARSeg.

BC15
Kumar Ganapathy and Benjamin W. Wah, ``Systematic Synthesis of Processor Arrays for Uniform Recurrence Equations,'' Transformational Approaches to Systolic Design, ed. G. M. Megson, Chapman and Hall, New York, NY, 1993, pp. 1-33.

Hardware implementations of algorithms have a tremendous impact on real-time applications where speed is of utmost importance. Systematic techniques for generating these hardware arrays are crucial for the design of large systems involving a number of such arrays, which can be visualized as hardware libraries accessible from a general-purpose host computer. In this chapter, we propose a systematic parameter-based approach to generate systolic architectures for problems expressible as uniform recurrences. Lower-dimensional arrays that solve the recurrence can be generated optimally using our parameter-based approach. Our method results in designs whose sizes are functions of the size of the index space defined by the recurrences. This leads to application-specific processor arrays for every problem.

BC16
Steven R. Schwartz and Benjamin W. Wah, ``Machine Learning of Computer Vision Algorithms,'' Handbook of Pattern Recognition and Image Processing, Vol. 2: Computer Vision, T. Y. Young, ed., Academic Press, New York, NY, 1994, pp. 319-359.

Both human vision and human learning are far from fully understood, yet machine imitation of these abilities has proven very useful. Research to date has made much progress by tracing the desired result backward to the algorithms that implement it: we know what we want the machine to do and, lacking a complete understanding of the biological processes, we proceed to find an algorithmic solution. This has been the case in both machine learning and computer vision. This chapter presents an approach to integrating machine-learning techniques to improve the performance of computer-vision algorithms. The topics considered here are techniques for machine learning, examples of how they can be applied, and a specific case study detailing the application of the methods. Topics not considered here include reasoning about detected objects and planning interaction with the environment.

BC17
J. A. B. Fortes, B. W. Wah, W. J. Shang, and K. Ganapathy, ``Algorithmic-Specific Parallel Processing with Linear Processor Arrays,'' Advances in Computers, vol. 33, ed. M. Yovitz, Academic Press, 1994, pp. 197-245.

Many applications of digital signal processing, scientific computing, digital communications and control are characterized by repeated execution of a small number of computationally intensive operations. In order to meet performance requirements it is often necessary to dedicate hardware with parallel processing capabilities to these specialized operations. Processor arrays, due to their structural regularity and consequent suitability for VLSI implementation, are frequently used for this purpose. Regardless of whether their structures are fixed or designed to match algorithm characteristics, it is important to understand how to map these algorithms into processor arrays. This article discusses systematic ways to derive such mappings. The techniques are illustrated by examples involving linear arrays of processors (one-dimensional processor arrays) as well as arrays of arbitrary dimensions. We present two methods for systematically mapping recurrent computations on such linearly connected processor arrays. The first method presented in this article guarantees that no more than one computation is assigned to execute in any processor at any given time but assumes enough data channels between processors for the necessary data communication. The second method considers, in addition to computational conflicts, the possibility of communication conflicts and guarantees that individual links are not required to pass more than one data item at a time.

BC18
B. W. Wah, A. Ieumwananonthachai, and T. Yu, ``Genetics-Based Learning and Statistical Generalization,'' Knowledge-Based Systems: Advanced Concepts, Techniques and Applications, S. Tzafestas (ed.), World Scientific Pub. Co., 1997, pp. 319-347.

In this chapter, we have studied statistical generalization for determining the performance of a given set of heuristic methods (HMs) found in a heuristics-design process over a problem domain. The objective of statistical generalization is to determine the ``best'' HM from among a given set of HMs over test cases in a problem domain. Due to difficulties in finding an HM that is superior all the time, the objective in learning and in generalization is relaxed to finding a subset of HMs that are better than the incumbent HM. One of the major problems in performance generalization is the dependence of performance information on the size of test cases in an application domain. Performance data obtained from test cases of different sizes do not belong to a common statistical distribution and cannot be aggregated. We address this problem by dividing test cases in a domain into subdomains and by normalizing performance data in a subdomain so that normalized data in a subdomain are independent and identically distributed. Performance data in a subdomain can then be aggregated statistically into averages.

We have proposed a statistical method for comparing the performance of HMs. Instead of using sample averages alone, which can be erroneous when sample standard deviations are large, we evaluate a measure called probability of win that depends on both the sample mean and the sample standard deviation. It measures the probability that the population mean performance of one HM is better than the population mean of another. It can also be considered a normalized performance measure, as its value lies between 0 and 1. We have shown anomalies in the ordering of HMs for some normalization methods when the baseline for normalization is changed, and have identified two normalization methods that do not have such anomalies. The best choice of a baseline HM is one whose performance is invariant when new HMs are generated in learning. One such HM is one whose performance on a test case is the median performance of all HMs evaluated on this test case.
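
To make the measure concrete, one common way to estimate such a probability of win under a normal approximation is sketched below in LaTeX; the notation (sample means, sample standard deviations s, sample sizes n, baseline b) is ours, and the chapter's exact estimator may differ:

    % Probability that HM i has a better population mean than baseline HM b,
    % estimated under a normal approximation (illustrative notation only):
    P_{\mathrm{win}}(i \mid b) \;=\; \Pr(\mu_i > \mu_b)
      \;\approx\; \Phi\!\left( \frac{\bar{x}_i - \bar{x}_b}
                                    {\sqrt{s_i^2/n_i \;+\; s_b^2/n_b}} \right)

Here \Phi is the standard normal distribution function; values near 1 indicate that HM i is very likely better than the baseline, while values near 0.5 indicate that the available samples cannot distinguish the two.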

In evaluating performance of an HM across subdomains, it is difficult to find an aggregate statistical measure as performance data across subdomains can be dependent. Moreover, we do not know the weight that should be placed on each subdomain in evaluating the aggregate measure. For this reason, we have proposed to place a common performance constraint across all subdomains and accept an HM if its performance satisfies the constraint across all the subdomains. Finally, we have described the architecture of Teacher, our prototype learning system, and experimental results in learning new HMs for TimberWolf. Our results show that automated learning and generalization will lead to improved HMs.

BC19
B. W. Wah, A. Ieumwananonthachai, and Y. C. Li, ``Generalization of Heuristics Learned in Genetics Based Learning,'' Genetic Algorithms for Pattern Recognition, ed. S. K. Pal and P. Wang, CRC Press, 1996, pp. 87-126.

In this paper, we study methods for developing general heuristics in order to solve problems in knowledge-lean application domains with a large and possibly infinite problem space. Our approach is based on genetic learning that generates heuristics, tests each to a limited extent, and prunes unpromising ones from further consideration. We summarize possible sources of anomalies in performance evaluation of heuristics along with our methods for coping with them. Based on the heuristics learned, we propose and study methods for generalizing heuristics to unlearned problem domains. Our method uses a new statistical measure called probability of win, which assesses the performance of heuristics in a distribution-independent manner. To validate our approach, we show experimental results on generalizing heuristics learned for sequential circuit testing, VLSI cell placement and routing, and branch-and-bound search. We show that generalization can lead to new and robust heuristics that perform well across problem instances of different characteristics in an application domain.

BC20
A. Ieumwananonthachai and B. W. Wah, ``Statistical Generalization of Performance-Related Heuristics for Knowledge-Lean Applications,'' Evolutionary Algorithms in Engineering Applications, D. Dasgupta and Z. Michalewicz (eds.), Springer Verlag, 1997, pp. 293-313.

In this chapter, we present new results on the automated generalization of performance-related heuristics learned for knowledge-lean applications. By first applying genetics-based learning to learn new heuristics for some small subsets of test cases in a problem space, we study methods to generalize these heuristics to unlearned subdomains of test cases. Our method uses a new statistical metric called probability of win. By assessing the performance of heuristics in a range-independent and distribution-independent manner, we can compare heuristics across problem subdomains in a consistent manner. To illustrate our approach, we show experimental results on generalizing heuristics learned for sequential circuit testing, VLSI cell placement and routing, branch-and-bound search, and blind equalization. We show that generalization can lead to new and robust heuristics that perform better than the original heuristics across test cases of different characteristics.

BC21
A. Ieumwananonthachai and B. W. Wah, ``Teacher: A Genetics-Based System for Learning and for Generalizing Heuristics,'' Chapter 4 in Evolutionary Computation, Xin Yao (ed.), World Scientific Publishing Co. Pte. Ltd., pp. 124-170, 1999; Also in Soft Computing in Case Based Reasoning, S. K. Pal, D. Dillon and D. Yeung (ed.), Springer-Verlag, London, pp. 179-211, 2000.

In this chapter, we present the design of Teacher (an acronym for TEchniques for the Automated Creation of HEuRistics), a system for learning and for generalizing heuristics used in problem solving. Our system learns knowledge-lean heuristics whose performance is measured statistically. The objective of the design process is to find, under resource constraints, improved heuristic methods (HMs) as compared to existing ones. Teacher addresses five general issues in learning heuristics: (1) decomposition of a problem solver into smaller components and integration of HMs designed for each together; (2) classification of an application domain into subdomains so that performance can be evaluated statistically for each; (3) generation of new and improved HMs based on past performance information and heuristics generated; (4) evaluation of each HM's performance; and (5) performance generalization to find HMs that perform well across the entire application domain. Teacher employs a genetics-based machine learning approach and divides the design process into four phases. In the classification phase, the application domain is divided into subspaces (based on user requirements) and problem subdomains (based on the performance behavior of HMs). In the learning phase, HMs are generated and evaluated under resource constraints with a goal of discovering improved HMs. In the performance-verification phase, good HMs from the learning phase are further evaluated to acquire more accurate and more complete performance information. Finally, in the performance-generalization phase, HMs most likely to provide the best performance over the entire application domain are selected. We conclude the chapter by showing some experimental results on heuristics learned for two problems used in circuit testing.

BC22
B. W. Wah, ``Generalization,'' Encyclopedia of Electrical and Electronics Engineering, J. G. Webster (ed.), John Wiley and Sons, 1998, pp. 684-692.

In this paper, we define the generalization problem, summarize various approaches in generalization, identify the credit assignment problem, and present the problem of and some solutions for measuring generalizability. We discuss anomalies in the ordering of hypotheses in a subdomain when performance is normalized and averaged, and show conditions under which anomalies can be eliminated. To generalize performance across subdomains, we present a measure called probability of win that measures the probability that one hypothesis is better than another. Finally, we discuss some limitations in using probabilities of win.

BC24
J. Gu, P. W. Purdom, J. Franco, and B. W. Wah, ``Algorithms for the Satisfiability (SAT) Problem,'' Chap. 17.8 in Handbook of Applied Optimization, P. M. Pardalos and M. G. C. Resende (ed.), Oxford University Press, Oxford, UK, 2002, pp. 640-660.

The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, database, robotics, integrated circuit design, computer architecture design, and computer network design. In this article, we describe four major classes of algorithms: variable setting and resolution procedures, local search algorithms, nonlinear unconstrained methods, and nonlinear constrained techniques, for the SAT problem. A number of advanced techniques for SAT problems, such as general Boolean representations, the binary decision diagram (BDD), multispace search method, parallel SAT algorithms and architectures, and the multiSAT algorithm, are described in more comprehensive discussions (Gu et al., 1997a,b, 2000).
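
As a concrete instance of the local-search class surveyed in this article (and only of that class; it is not the chapter's multispace-search or multiSAT method), the following Python sketch implements a generic WalkSAT-style procedure over clauses given as lists of signed variable indices:

    # Generic WalkSAT-style local search for SAT (illustrative sketch, not from the chapter).
    import random

    def walksat(clauses, n_vars, max_flips=20000, p=0.5, seed=0):
        rnd = random.Random(seed)
        assign = [False] + [rnd.choice([False, True]) for _ in range(n_vars)]  # 1-indexed vars
        def sat(cl): return any(assign[l] if l > 0 else not assign[-l] for l in cl)
        def breaks(v):                     # clauses that a flip of variable v would newly falsify
            assign[v] = not assign[v]
            b = sum(1 for c in clauses if any(abs(l) == v for l in c) and not sat(c))
            assign[v] = not assign[v]
            return b
        for _ in range(max_flips):
            unsat = [c for c in clauses if not sat(c)]
            if not unsat:
                return assign[1:]          # satisfying assignment for x1..xn
            c = rnd.choice(unsat)
            v = abs(rnd.choice(c)) if rnd.random() < p else min((abs(l) for l in c), key=breaks)
            assign[v] = not assign[v]      # flip the chosen variable
        return None                        # no model found within the flip budget

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(walksat([[1, 2], [-1, 3], [-2, -3]], 3))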

BC25
B. W. Wah, ``Constrained Genetic Algorithms and their Applications in Nonlinear Constrained Optimization,'' in Evolutionary Computation, X. Yao and R. Sarker (ed.), Kluwer Academic Press, 2001, pp. 253-275.

This chapter presents a framework that unifies various search mechanisms for solving constrained nonlinear programming (NLP) problems. These problems are characterized by functions that are not necessarily differentiable and continuous. Our proposed framework is based on the first-order necessary and sufficient condition developed for constrained local minimization in discrete space that shows the equivalence between discrete-neighborhood saddle points and constrained local minima. To look for discrete-neighborhood saddle points, we formulate a discrete constrained NLP in an augmented Lagrangian function and study various mechanisms for performing descents of the augmented function in the original-variable subspace and ascents in the Lagrange-multiplier subspace. Our results show that CSAGA, a combined constrained simulated annealing and genetic algorithm, performs well when using crossovers, mutations, and annealing to generate trial points. Finally, we apply iterative deepening to determine the optimal number of generations in CSAGA and show that its performance is robust with respect to changes in population size.

BC26
B. W. Wah and M. L. Qian, ``Chapter 17: Constraint-Based Neural Network Learning For Time Series Predictions,'' in Handbook of Intelligent IT, N. Zhong and J. Liu (ed.), Springer-Verlag, 2004, pp. 401-420.

In this chapter, we have briefly surveyed previous work in predicting noise-free piecewise chaotic time series and noisy time series with high-frequency random noise. For noise-free time series, we have proposed a constrained formulation for neural-network learning that incorporates the error of each learning pattern as a constraint, a new cross-validation scheme that allows multiple validation sets to be considered in learning, a recurrent FIR neural-network architecture that combines a recurrent structure and a memory-based FIR structure, and a violation-guided backpropagation algorithm for searching in the constrained space of the formulation. For noisy time series, we have studied systematically the edge effect due to low-pass filtering of noisy time series and have developed an approach that incorporates constraints on predicting low-pass data in the lag period. The new constraints enable active training in the lag period that greatly improves the prediction accuracy in the lag period. Finally, experimental results show significant improvements in prediction accuracy on standard benchmarks and stock-price time series.

BC27
B. W. Wah and Y. X. Chen, ``The Evaluation of Partitioned Temporal Planning Problems in Discrete Space and its Application in ASPEN,'' in Frontiers in Artificial Intelligence and Applications, vol. 112, W. X. Zhang and V. Sorge (ed.), IOS Press, pp. 109-123, 2004.

In this paper, we study the partitioning and evaluation of discrete-space temporal planning problems and illustrate our techniques on ASPEN, an objective-based planner developed at the Jet Propulsion Laboratory for the automated planning and scheduling of complex spacecraft control operations. We formulate these planning problems as single or multi-objective dynamic optimization problems and propose the necessary and sufficient extended saddle point condition (ESPC) that governs the correctness of locally optimal plans. We then decompose the ESPC into partitioned ESPCs, one for each stage, and show that they are necessary and sufficient collectively. By utilizing these partitioned conditions, we present efficient search algorithms whose complexity, though still exponential, has a much smaller base as compared to that without using the conditions. Finally, we demonstrate the performance of our approach by integrating it in the ASPEN planner and show significant improvements in CPU time and solution quality on some spacecraft scheduling and planning benchmarks.

BC28
B. W. Wah, Y. X. Chen, and T. Wang, ``Theory and Applications of Simulated Annealing for Nonlinear Constrained Optimization,'' in Simulated Annealing, C. M. Tan (ed.), I-Tech Education and Publishing, Vienna, Austria, 2008, pp. 155-186.

In this chapter, we present constrained simulated annealing (CSA), an algorithm that extends conventional simulated annealing to look for constrained local minima of constrained optimization problems. The algorithm is based on the theory of extended saddle points (ESPs) that shows the one-to-one correspondence between a constrained local minimum of the problem and an ESP of the corresponding penalty function. CSA finds ESPs by systematically controlling probabilistic descents in the original variable space of the penalty function and probabilistic ascents in the penalty space. Based on the decomposition of the necessary and sufficient ESP condition into multiple necessary conditions, we also describe constraint-partitioned simulated annealing (CPSA) that exploits the locality of constraints in nonlinear optimization problems. CPSA leads to much lower complexity as compared to that of CSA by partitioning the constraints of a problem into exponentially simpler subproblems, solving each independently, and resolving those violated global constraints across the subproblems. We evaluate CSA and CPSA by applying them to solve some continuous constrained optimization problems and compare their performance to that of other penalty methods. Finally, we apply CSA to solve two real-world applications, one on sensor-network placement design and another on out-of-core compiler code generation.
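
The alternating descent/ascent scheme described above can be pictured with the following toy Python sketch (our simplification, not the authors' implementation): it minimizes a hypothetical f(x) subject to h(x) = 0 by Metropolis-style trial moves that seek descents of the penalty function in x and ascents in the multiplier lambda, under an assumed geometric cooling schedule.

    # Toy sketch of CSA-style probabilistic descents in x and ascents in lambda.
    import math, random

    rnd = random.Random(0)
    f = lambda x: (x - 7) ** 2                 # objective (hypothetical)
    h = lambda x: x % 3                        # constraint h(x) = 0 (hypothetical)
    L = lambda x, lam: f(x) + lam * abs(h(x))  # penalty function

    x, lam, T = 0, 1.0, 10.0
    for step in range(2000):
        if rnd.random() < 0.8:                                 # trial move in x: seek descent
            x_new = max(0, min(20, x + rnd.choice([-1, 1])))
            d = L(x_new, lam) - L(x, lam)
            if d <= 0 or rnd.random() < math.exp(-d / T):
                x = x_new
        else:                                                  # trial move in lambda: seek ascent
            lam_new = max(0.0, lam + rnd.choice([-0.5, 0.5]))
            d = L(x, lam_new) - L(x, lam)
            if d >= 0 or rnd.random() < math.exp(d / T):
                lam = lam_new
        T *= 0.998                                             # geometric cooling (assumed)
    print(x, h(x))    # expect a feasible x (h(x) == 0) with small f(x), e.g., x = 6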

BC29
N. Zhong, J. Liu, Y. Yao, J. Wu, S. Lu, Y. Qin, K. Li, and B. W. Wah, ``Web Intelligence Meets Brain Informatics,'' Web Intelligence Meets Brain Informatics, Z. Zhong, J. Liu, Y. Yao, J. Wu, S. Lu, and K. Li (ed.), LNAI 4845 State of the Art Survey, Springer, 2008, pp. 1-31.

In this chapter, we outline a vision of Web Intelligence (WI) research from the viewpoint of Brain Informatics (BI), a new interdisciplinary field that systematically studies the mechanisms of human information processing from both the macro and micro viewpoints by combining experimental cognitive neuroscience with advanced information technology. BI studies the human brain from the viewpoint of informatics (i.e., the human brain is an information-processing system) and uses informatics (i.e., WI-centric information technology) to support brain-science studies. Advances in instrumentation (e.g., fMRI-based) and in information technologies offer more opportunities for research in both Web intelligence and brain sciences. Further understanding of human intelligence through brain sciences fosters innovative Web-intelligence research and development. WI portal techniques provide a powerful new platform for brain sciences. The synergy between WI and BI advances our ways of analyzing and understanding data, knowledge, intelligence, and wisdom, as well as their interrelationships, organizations, and creation processes. Web intelligence is becoming a central field that revolutionizes information technologies and artificial intelligence to achieve human-level Web intelligence.

TB02
Wee Hong Yeo, ``New Piggybacking Algorithm in VoIP using Enhanced G.722.2 Codec with Larger Frames,'' B.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana-Champaign, May 2009.
In this thesis, we present the design of a new piggybacking algorithm implemented on the G.722.2 VoIP codec. In a piggybacking algorithm, multiple speech frames, including those transmitted in the past, are encapsulated in a single packet. Because redundant copies of each frame are transmitted to the receiver, the receiver can recover frames lost when one or more packets are lost or arrive late in transmission. In this thesis, we have developed a codec that removes further redundancies in the multiple frames encapsulated in piggybacking by sending only one set of Linear Prediction (LP) coefficients for all subframes encapsulated. We create multiple versions of the codec, each working with a different frame size. Our new codec can encode the multiple frames with little degradation in Perceptual Evaluation of Speech Quality (PESQ), while achieving substantially larger bit savings. The performance of the new algorithm is evaluated against the original method of piggybacking over random losses, as well as over packet traces collected on PlanetLab.
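
A schematic sketch of the packet layout implied by this scheme is given below in Python; the frame fields and the choice of two piggybacked past frames are hypothetical placeholders, not the thesis codec:

    # Schematic piggybacking with a shared set of LP coefficients: each packet carries the
    # current frame plus the previous `redundancy` frames, but the LP coefficients are sent
    # once per packet rather than once per frame (our illustration only).
    def build_packet(frames, seq, redundancy=2):
        group = frames[max(0, seq - redundancy): seq + 1]      # past frames + current frame
        return {
            "seq": seq,
            "lp_coeffs": group[-1]["lp_coeffs"],               # one LP set for all encapsulated frames
            "excitations": [fr["excitation"] for fr in group], # per-frame excitation parameters kept
        }

    frames = [{"lp_coeffs": [0.1 * i] * 10, "excitation": f"exc{i}"} for i in range(5)]
    pkt = build_packet(frames, seq=4)
    print(len(pkt["excitations"]), len(pkt["lp_coeffs"]))      # 3 piggybacked frames, 10 shared coeffs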

TP8
Pankaj Mehra, ``Automated Learning of Load-Balancing Strategies For A Distributed Computer System,'' Ph.D. Thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, December, 1992.

Workstations interconnected by a local-area network are the most common examples of distributed systems. The performance of such systems can be improved via load balancing, which migrates tasks from the heavily loaded sites to the lightly loaded ones. Load-balancing strategies have two components: load indices and migration policies. This thesis presents SMALL (Systematic Method for Automated Learning of Load-balancing strategies), a system that learns new load indices and tunes the parameters of given migration policies. The key component of SMALL is DWG, a dynamic workload generator that allows off-line measurement of task-completion times under a wide variety of precisely controlled loading conditions. The data collected using DWG are used for training comparator neural networks, a novel architecture for learning to compare functions of time series. After training, the outputs of these networks can be used as load indices. Finally, the load-index traces generated by the comparator networks are used for tuning the parameters of given load-balancing policies. In this final phase, SMALL interfaces with the Teacher system of Wah et al. in order to search the space of possible parameters using a combination of point-based and population-based approaches. Together, the components of SMALL constitute an automated strategy-learning system for performance-driven improvement of existing load-balancing software.

TP9
Kumar Ganapathy, ``Mapping Regular Recursive Algorithms to Fine-Grained Processor Arrays,'' Ph.D. Thesis, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, May 1994.

With the continuing growth of VLSI technology, special-purpose parallel processors have become a promising approach in the quest for high performance. Fine-grained processor arrays have become popular as they are suitable for solving problems with a high degree of parallelism, and can be inexpensively built using custom designs or commercially available field programmable gate arrays (FPGA). Such specialized designs are often required in portable computing and communication systems with real-time constraints, as software-controlled processors often fail to provide the necessary throughput. This thesis addresses many issues in designing such application-specific systems built with fine-grained processor arrays for regular recursive uniform dependence algorithms. A uniform dependence algorithm consists of a set of indexed computations and a set of uniform dependence vectors which are independent of the indices of computations. Many important applications in signal/image processing, communications, and scientific computing can be formulated as uniform dependence algorithms.

The first part of this thesis addresses the problem of designing algorithm-specific processor arrays. A systematic parameter-based method, called the General Parameter Method (GPM), to design optimal, lower-dimensional processor arrays for uniform dependence algorithms has been developed. The GPM can be used to derive optimal arrays for any user-specified objective expressed in terms of the parameters. The proposed approach employs an efficient search technique to explore the design space and arrive at the optimal designs. Equivalence between the parameter and dependence-based methods can be used to find optimal designs in the dependence-based approaches. The GPM has also been extended to derive optimal two-level pipelined algorithm-specific processor arrays. Such two-level pipelined arrays can be clocked at higher rates than can nonpipelined designs for real-time applications.

The second part of this thesis presents a parallel VLSI architecture for a general-purpose coprocessor for uniform dependence algorithms. The architecture consists of a linear array of processors and a linear chain of buffer memories organized as FIFO queues to store the buffered data. Such an architecture is advantageous from the point of view of scalability and wafer-level integration. A distinguishing feature is the assumption of a limited-bandwidth interface to external memory modules for accessing the data. Such an assumption allows the coprocessor to be integrated easily into existing systems. Efficient techniques to partition the dependence graph into blocks, sequence the blocks through the buffer memory to reduce the number of data accesses to main memory, and map the blocks using GPM have been developed. An important result obtained is the square-root relationship between clock-rate reduction and area of the coprocessor under fixed main-memory bandwidth. From the square-root relationship, it can be found that the system yield improves with the area of the coprocessor when chip yield decreases as the inverse square of the clock frequency. Results on matrix-product and transitive-closure applications indicate that the coprocessor can be used to deliver higher speedup or lower clock rate than a reference one-processor design. Thus, the coprocessor can be used as a general-purpose back-end accelerator for loop-based matrix algorithms.

TP10
Lon-Chan Chu, ``Algorithms for Combinatorial Optimization in Real Time and their Automated Refinements by Genetics-Based Learning,'' Ph.D. Thesis, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, August 1994.

The goal of this research is to develop a systematic, integrated method of designing efficient search algorithms that solve optimization problems in real time. Search algorithms studied in this thesis comprise meta-control and primitive search. The class of optimization problems addressed is called combinatorial optimization problems, examples of which include many NP-hard scheduling and planning problems, and problems in operations research and artificial-intelligence applications. The problems we have addressed have a well-defined problem objective and a finite set of well-defined problem constraints. In this research, we use state-space trees as problem representations. The approach we have undertaken in designing efficient search algorithms is an engineering approach and consists of two phases: (a) designing generic search algorithms, and (b) improving, by genetics-based machine-learning methods, the parametric heuristics used in the search algorithms designed. Our approach is a systematic method that integrates domain knowledge, search techniques, and automated learning techniques for designing better search algorithms. Knowledge captured in designing one search algorithm can be carried over to designing new ones.

TP11
Arthur Ieumwananonthachai, ``Automated Design of Knowledge-Lean Heuristics: Learning, Resource Scheduling, and Generalization,'' Ph.D. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, May 1996.

In this thesis we present new methods for the automated design of new heuristics in knowledge-lean applications and for finding heuristics that can be generalized to unlearned test cases. These applications lack domain knowledge for credit assignment; hence, operators for composing new heuristics are generally model free, domain independent, and syntactic in nature. The operators we have used are genetics based; examples of which include mutation and crossover. Learning is based on a generate-and-test paradigm that maintains a pool of competing heuristics, tests them to a limited extent, creates new ones from those that perform well in the past, and prunes poor ones from the pool. We have studied four important issues in learning better heuristics: (a) partitioning of a problem domain into smaller subsets, called subdomains, so that performance values within each subdomain can be evaluated statistically, (b) anomalies in performance evaluation within a subdomain, (c) rational scheduling of limited computational resources in testing candidate heuristics in single-objective as well as multi-objective learning, and (d) finding heuristics that can be generalized to unlearned subdomains.

We show experimental results in learning better heuristics for (a) process placement for distributed-memory multicomputers, (b) node decomposition in a branch-and-bound search, (c) generation of test patterns in VLSI circuit testing, (d) VLSI cell placement and routing, and (e) blind equalization.

TP12
Yi Shang, ``Global Search Methods for Solving Nonlinear Optimization Problems,'' Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, Aug. 1997.

In this thesis, we present new methods for solving nonlinear optimization problems. These problems are difficult to solve because the nonlinear constraints form feasible regions that are difficult to find, and the nonlinear objectives contain local minima that trap descent-type search methods. In order to find good solutions in nonlinear optimization, we focus on the following two key issues: how to handle nonlinear constraints and how to escape from local minima. We use a Lagrange-multiplier-based formulation to handle nonlinear constraints, and develop Lagrangian methods with dynamic control to provide faster and more robust convergence. We extend the traditional Lagrangian theory for the continuous space to the discrete space and develop efficient discrete Lagrangian methods. To overcome local minima, we design a new trace-based global-search method that relies on an external traveling trace to pull a search trajectory out of a local optimum in a continuous fashion without having to restart the search from a new starting point. Good starting points identified in the global search are used in the local search to identify true local optima. By combining these new methods, we develop a prototype, called Novel (Nonlinear Optimization Via External Lead), that solves nonlinear constrained and unconstrained problems in a unified framework.

We show experimental results in applying Novel to solve nonlinear optimization problems, including (a) the learning of feedforward neural networks, (b) the design of quadrature-mirror-filter digital filter banks, (c) the satisfiability problem, (d) the maximum satisfiability problem, and (e) the design of multiplierless quadrature-mirror-filter digital filter banks. Our method achieves better solutions than existing methods, or achieves solutions of the same quality but at a lower cost.

TP13
Tao Wang, ``Global Optimization for Constrained Nonlinear Programming,'' Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, Dec. 2000.

In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_{dn}) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_{dn}) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_{dn}, leading to the first-order necessary and sufficient condition for CLM_{dn}.

To find a CGM_{dn}, CSA searches for a discrete saddle point with the minimum objective value by carrying out both probabilistic descents in the original-variable space of a discrete augmented Lagrangian function and probabilistic ascents in the Lagrange-multiplier space. We prove that CSA converges asymptotically to a CGM_{dn} with probability one. We also extend CSA to solve continuous and mixed-integer constrained NLPs. By achieving asymptotic convergence, CSA represents one of the major developments in nonlinear constrained global optimization today, which complements simulated annealing (SA) in unconstrained global optimization.

Based on CSA, we have studied various strategies of CSA and their trade-offs for solving continuous, discrete, and mixed-integer NLPs. The strategies evaluated include adaptive neighborhoods, distributions to control sampling, acceptance probabilities, and cooling schedules. An optimization software package based on CSA and its various strategies has been implemented.

Finally, we apply CSA to solve a collection of engineering application benchmarks and design filters for subband image coding. Much better results have been reported in comparison with other existing methods.

TP14
Zhe Wu, ``The Theory and Applications of Discrete Constrained Optimization using Lagrange Multipliers,'' Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, May 2001.

In this thesis, we present a new theory of discrete constrained optimization using Lagrange multipliers and an associated first-order search procedure (DLM) to solve general constrained optimization problems in discrete, continuous and mixed-integer space. The constrained problems are general in the sense that they do not assume the differentiability or convexity of functions. Our proposed theory and methods are targeted at discrete problems and can be extended to continuous and mixed-integer problems by coding continuous variables using a floating-point representation (discretization). We have characterized the errors incurred due to such discretization and have proved that there exist upper bounds on the errors. Hence, continuous and mixed-integer constrained problems, as well as discrete ones, can be handled by DLM in a unified way with bounded errors.

Starting from new definitions of discrete neighborhoods, constrained local minima in discrete space, and a new generalized augmented Lagrangian function, we have developed new discrete-space first-order necessary and sufficient conditions that are able to characterize all constrained local minima in discrete space. Our proposed first-order conditions show a one-to-one correspondence between a discrete-space constrained local minimum, a discrete-space saddle point, and a solution to the first-order conditions. They are important because they allow us to transform the difficult problem of looking for constrained local minima in the original-variable space to the easier problem of looking for saddle points in the discrete Lagrangian space. They provide a solid foundation for DLM to solve general constrained problems, something that cannot be achieved in the conventional theory of Lagrange multipliers for solving continuous constrained nonlinear programming problems.

Finally, we demonstrate the efficiency and effectiveness of our proposed theory and methods. DLM is able to systematically solve general discrete, continuous, and mixed-integer constrained benchmarks, a task not achieved by previous methods. DLM has found better multiplierless filter-bank designs that improve over all of Johnston's benchmark designs using a maximum of three to six ONE bits in each filter coefficient instead of using floating-point representations. DLM has also efficiently found new solutions for satisfiability problems that were not possible with existing local- and global-search techniques.

TP15
Xiao Su, ``Error Concealments for Robust Image and Video Transmissions on the Internet,'' Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, Sept. 2001.

The coding and transmission of images and videos has been a popular but very challenging topic. Traditional coding algorithms are usually designed to optimize the compression ratio in an error-free environment, not for the current Internet, which is a best-effort, packet-switched, and unreliable network. This conflict presents a number of challenges for high-quality image and video transmissions.

Information loss and bandwidth limitation are two major factors that affect the quality of video streaming. In this thesis, we have developed various error concealment and reconstruction-based rate control schemes to address these two issues. First, we have proposed a sender-receiver based approach for designing a multiple-description video coder that facilitates recovery of packet losses. Second, we have studied artificial neural network-based reconstruction algorithms for compensating nonlinear compression losses due to multiple-description coding. Third, we have employed syntax-based packetization and decoder-side feedback for reducing propagation losses. Last, we have incorporated the reconstruction process in the design and evaluation of rate control schemes.

Likewise, delay and packet loss are two primary concerns for image transmissions. Existing approaches for delivering single-description coded images by TCP give superior quality but very long delays when the network is unreliable. To reduce delays, we have proposed to use UDP to deliver sender-receiver based multiple-description coded images. To further improve the trade-offs between delay and quality of either TCP delivery or UDP delivery, we have investigated a combined TCP/UDP delivery that can give shorter delay than pure TCP delivery and better quality than pure UDP delivery.

TP16
Dong Lin, ``Loss Concealments for Low Bit-Rate Packet Voice,'' Ph.D. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, Aug. 2002.

In recent years, packet voice over the Internet has become an attractive alternative to conventional public telephony. However, the Internet is a best-effort network, with no guarantee on its quality of service. A fundamental issue in real-time interactive voice transmissions over an unreliable Internet Protocol (IP) network is the loss, or late arrival, of packets. This problem is especially serious in transmitting low bit-rate coded speech when pervasive dependencies are introduced in a bit stream, leading to errors propagated to subsequent frames when a packet is lost. As a result, the study of loss-concealment schemes is important in ensuring high-quality playback.

In this research, we choose an end-to-end loss-concealment approach that requires no special support from the underlying network. In particular, we focus on developing a nonredundant sender-receiver collaborated multiple-description coding (MDC) scheme.

We propose a new coder-dependent parameter-based MDC. Based on the correlations of coding parameters, we generate multiple descriptions systematically. In particular, we have observed high inter-frame correlations in the linear predictors of low bit-rate coders, but low correlations in the excitation parameters. Hence, we generate multiple descriptions at the sender side by interleaving linear predictors, and reconstruct lost ones at the receiver side by linear interpolation. In addition, we replicate hard-to-reconstruct excitation parameters in multiple descriptions. Our design does not require extra transmission bandwidth and can dynamically adapt its number of descriptions to network-loss conditions. We have compared three linear-predictor representations, Reflection Coefficient, Log Area Ratio, and Line Spectral Pair (LSP), by their reconstruction qualities, and have found that the LSP representation performs the best. Finally, we have studied LSP reconstructions and enhancements of excitation quality. Extensive tests on FS CELP, ITU G.723.1, and FS MELP under different loss scenarios have shown the good quality and reliability of our proposed scheme.
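
The interleave-and-interpolate idea for the linear predictors can be pictured with the following numpy sketch (our illustration, not the thesis coder): even- and odd-indexed frames' LSP vectors go to two descriptions, and if one description is lost, the missing LSPs are rebuilt by linear interpolation between received neighbors.

    # Simplified parameter-based MDC sketch for LSP parameters (illustrative only).
    import numpy as np

    def split_descriptions(lsp):                 # lsp: (n_frames, order) LSP parameters
        return lsp[0::2], lsp[1::2]              # description 0: even frames, 1: odd frames

    def reconstruct_odd(even):                   # odd description lost: interpolate between even frames
        rebuilt = []
        for i in range(len(even) - 1):
            rebuilt += [even[i], 0.5 * (even[i] + even[i + 1])]
        rebuilt += [even[-1]]                    # the final odd frame would need extrapolation/concealment
        return np.array(rebuilt)

    lsp = np.linspace(0, 1, 6 * 10).reshape(6, 10)     # 6 frames, order-10 LSPs (toy data)
    even, odd = split_descriptions(lsp)
    approx = reconstruct_odd(even)
    print(np.abs(approx - lsp[:len(approx)]).max())    # small interpolation error on smooth LSPs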

TP17
Minglun Qian, ``Neural Network Learning For Time-Series Predictions Using Constrained Formulations,'' Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, May 2005.

In this thesis, we propose a recurrent FIR neural network, develop a constrained formulation for neural network learning, study an efficient violation-guided backpropagation algorithm for solving the constrained formulation based on the theory of extended saddle points, and apply neural network learning for predicting both noise-free time series and noisy time series. The recurrent FIR neural-network architecture combines a recurrent structure and a memory-based FIR structure in order to provide a more powerful modeling ability. The constrained formulation for neural network learning incorporates the error of each learning pattern as a constraint, a new cross-validation scheme that allows multiple validation sets to be considered in learning, and new constraints that can be expressed in a procedural form. The violation-guided backpropagation algorithm first transforms the constrained formulation into an $l_1$-penalty function, and then searches for a saddle point of the penalty function.
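
One way to write down the kind of constrained formulation described above is the following LaTeX sketch; the notation, including the per-pattern error $e_i(w)$ and tolerance $\tau$, is ours, and the thesis's exact objective may differ. The $l_1$-penalty function is the form whose saddle points the violation-guided backpropagation algorithm searches for, with descents in the weights $w$ and ascents in the multipliers $\lambda \ge 0$:

    % Constrained learning formulation (illustrative notation):
    %   e_i(w) = error of the network with weights w on training pattern i
    \min_{w} \; \sum_i e_i(w)
      \quad \text{subject to} \quad e_i(w) \le \tau \quad \text{for every pattern } i.
    % Corresponding l_1-penalty function:
    L_1(w, \lambda) \;=\; \sum_i e_i(w) \;+\; \sum_i \lambda_i \,\max\bigl(0,\; e_i(w) - \tau\bigr)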

When applying the constrained formulation along with violation-guided backpropagation to neural-network learning on nearly noiseless time-series benchmarks, we achieve much improved prediction performance as compared to that of previous work, while using fewer parameters. For noisy time series, such as financial time series, we have studied systematically the trade-offs between denoising and information preservation, and have proposed three preprocessing techniques for time series with high-frequency noise. In particular, we have proposed a novel approach that first decomposes a noisy time series into different frequency channels and then preprocesses each channel adaptively according to its level of noise. We incorporate constraints on predicting low-pass data in the lag period when a low-pass filter is employed to denoise the band. The new constraints enable active training in the lag period that greatly improves the prediction accuracy in the lag period. Extensive prediction experiments on financial time series have been conducted to exploit the modeling ability of neural networks, and promising results have been obtained.

TP18
Yixin Chen, ``Solving Nonlinear Constrained Optimization Problems through Constraint Partitioning,'' Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, Sept. 2005.

In this dissertation, we propose a general approach that can significantly reduce the complexity in solving discrete, continuous, and mixed constrained nonlinear optimization (NLP) problems. A key observation we have made is that most application-based NLPs have structured arrangements of constraints. For example, constraints in AI planning are often localized into coherent groups based on their corresponding subgoals. In engineering design problems, such as the design of a power plant, most constraints exhibit a spatial structure based on the layout of the physical components. In optimal control applications, constraints are localized by stages or time.

We have developed techniques to exploit these constraint structures by partitioning the constraints into subproblems related by global constraints. Constraint partitioning leads to much relaxed subproblems that are significantly easier to solve. However, there exist global constraints relating multiple subproblems that must be resolved. Previous methods cannot exploit such structures using constraint partitioning because they cannot resolve inconsistent global constraints efficiently.

We have developed a mathematical theory that provides strong necessary and sufficient analytical conditions for limiting the subspace to be searched when resolving the global constraints. We have proposed the theory of extended saddle points (ESP) to support constraint partitioning when solving NLPs. Based on a novel penalty formulation, ESP offers a necessary and sufficient condition for constrained local optima of NLPs in discrete, continuous, and mixed spaces. It facilitates constraint partitioning by providing a set of necessary conditions, one for each subproblem, to characterize the local optima. It further reduces the complexity by defining a much smaller search space in each subproblem for backtracking. Since resolving the global constraints only incurs a small amount of overhead, our approach leads to a significant reduction of complexity.
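
The extended-saddle-point machinery can be sketched as follows; this is a paraphrase in our own notation, and the dissertation's exact statement and penalty form may differ:

    % Penalty function over equality constraints h(x) = 0 and inequality constraints g(x) <= 0:
    L_m(x, \alpha, \beta) \;=\; f(x) \;+\; \alpha^{T} |h(x)| \;+\; \beta^{T} \max\bigl(0,\, g(x)\bigr).
    % Extended saddle-point condition (sketch): x^* is a constrained local minimum iff there exist
    % finite \alpha^* \ge 0 and \beta^* \ge 0 such that, for all \alpha \ge \alpha^* and \beta \ge \beta^*,
    L_m(x^*, \alpha, \beta) \;\le\; L_m(x, \alpha, \beta)
      \quad \text{for all } x \text{ in a neighborhood of } x^*.

Under constraint partitioning, one such condition is imposed per subproblem over its local constraints, while the global constraints relating the subproblems are resolved separately, as described above.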

Our partition-and-resolve approach has achieved substantial improvements over existing methods in solving AI planning and mathematical programming problems. In this dissertation, we present SGPlan, a planner based on constraint partitioning that has significantly improved the solution time and quality on many application domains when compared to other leading planners. We also describe our implementation that has successfully incorporated ESP with ASPEN, a planning system for spacecraft exploration developed by NASA. The ESP planner performs 100 to 1000 times faster than the original ASPEN on NASA's benchmarks and can generate solutions of much higher quality.

Constraint partitioning has led to a major breakthrough in solving mathematical programming problems in operations research and engineering applications. In this dissertation, we have applied our method to solve some large-scale continuous and mixed-integer NLPs in standard benchmarks. We have solved some large-scale problems that were not solvable by other leading optimization packages and have improved the solution quality on many problems.

TP19
Batu Sat, ``Design and Evaluation of VoIP Systems with High Perceptual Conversational Quality,'' Ph.D. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, Aug. 2010.

This research addresses real-time multimedia communication systems that can achieve high perceptual quality for their users. It focuses on the fundamental understanding of multiple quality aspects perceived and their trade-offs in the design of run-time control schemes that adapt to changing network conditions and expectations of the users of a system.

One of the main contributions of this thesis is the use of adaptively scheduled off-line subjective comparison tests to efficiently and accurately learn users' subjective preferences among the alternative trade-offs in order to guide the run-time control schemes to achieve high perceptual quality.

In order to illustrate the application of the general framework and our methodology, throughout this thesis we study the design of real-time VoIP (voice-over-IP) systems that can achieve high perceptual conversational quality. The trade-offs in the design and operation of a VoIP system involve the design of speech codecs and strategies for network control, playout scheduling, and loss concealment.

The perceptual quality of a conversation over a network connection depends on the quality of the one-way speech (listening-only speech quality, or LOSQ) received and the delay incurred from the mouth of the speaker to the ear of the listener (mouth-to-ear delay, or MED). In a conversation, each participant takes turns in speaking and listening, and both perceive a silence duration called mutual silence (MS) when the conversation switches from one party to another. When the connection has delays, the MSs perceived by a participant consist of alternating short and long silence periods between turns. As a result, listeners may experience lower perceptual quality when the MSs are highly asymmetric, with some speakers appearing more distant than others or responding more slowly than others.

The evaluation of conversational quality is a largely unexplored area. There are many objective metrics for assessing the quality of a VoIP conversation, but there is no single objective metric whose results match well with subjective results. Indiscriminate subjective testing is not feasible because it is prohibitively expensive to carry out many such tests under various conversational and network conditions. Moreover, there is no systematic method to generalize the subjective test results to unseen conditions. To this end, this research makes five key innovations that address the limitations of existing work and improve the perceptual quality of VoIP systems.

Firstly, we have developed a methodology and test-bed to objectively and subjectively evaluate VoIP systems under repeatable and fair conditions using a black-box approach. We have applied this methodology and test-bed to four commonly used VoIP systems: Skype, Google-Talk, Windows Live, and Yahoo Messenger. The results show that different systems operate differently in coping with network imperfections such as jitter and loss. We also observe that the systems do not change their operation as a function of the one-way delay of the connection, or of the turn-taking frequency of the conversation. The results show that Windows Live is preferred under a significant set of conditions, but none of the systems is dominant under all conditions.

We have also learned the mapping between easily obtainable objective measures and subjective preferences of users in order to allow any VoIP system to be comprehensively compared against others without expensive subjective comparisons. We later use the mapping learned to subjectively compare our newly designed system with the four systems already evaluated.

Secondly, we have developed a general model of pair-wise subjective comparisons, based on individually identified properties, axioms and lemmas, that models comparisons on a continuous operating curve with a single control parameter. The model provides a basis for developing the method to schedule adaptive off-line subjective tests and for identifying the optimal point by fusing the information obtained from separate subjective evaluations on the same operating curve. The model can be used for formulating and solving any type of pair-wise comparison problem that exhibits the same properties identified. The model is flexible to allow the existence of multiple optimal points on an operating curve and includes a belief function framework that can guide the search for optimal points efficiently. Furthermore, the model is built on a statistical framework that allows for the confidence of individual evaluation results to be represented in the conclusiveness of the combined belief function.

Thirdly, we have developed a method for tackling the control-design problem of finding the optimal point in an N-dimensional space, which includes all the metrics that affect quality. The overall problem is transformed into two independent problems: finding the optimal point on a continuous but one-dimensional curve, and learning the mapping on a set of curves that adequately spans the $K$-dimensional ($K < N$) space.

Fourthly, we have applied the methodology to the design of play-out scheduling control for VoIP systems by conducting subjective tests and by learning from them under a comprehensive set of network and conversational conditions. We have also verified the accuracy and efficiency of our methodology using exhaustive subjective tests on a subset of the network and conversational conditions. The verification of our methodology on a real-life application also justified our model of pair-wise subjective comparisons and our optimal algorithm to adaptively choose upcoming comparisons using information learned from previously conducted tests.

Lastly, we have generalized the learned results in the design of play-out scheduling control to conditions that are unseen at design time and observed only at run time. Along with design choices in other components of the architecture, this results in a VoIP system that can achieve higher and more consistent perceptual quality than the other four VoIP systems analyzed. We have also shown that our system performs very close to a non-causal ideal system in which the play-out scheduling decisions are made optimally using future information.

Our model and methodologies are applicable to a wide variety of control-design problems for real-time multimedia communication systems that operate under constrained resources and communicate over non-stationary IP networks, and whose overall quality as perceived by users has multiple counteracting aspects that cannot be represented by a single objective metric.

TP20
Chih-Wei Hsu, ``Solving Automated Planning Problems with Parallel Decomposition,'' Ph.D. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, Dec. 2011.
In this dissertation, we present a parallel decomposition method to address the complexity of solving automated planning problems. We have found that many planning problems have good locality, which means that their actions can be clustered so that nearby actions in the solution plan usually come from the same cluster. We have also observed that the problem structure is regular and contains many repetitions. The repetitions come from symmetric objects in the planning problem, and a simplified instance with a similar problem structure can be generated by reducing the number of symmetric objects. We improve heuristic search in planning by utilizing locality and symmetry and by applying parallel decomposition. Our parallel decomposition approach exploits these structural properties in a domain-independent way in three steps: action partitioning, constraint resolution, and subproblem solutions. In each step, we propose solutions to exploit localities and symmetries in order to minimize solution time. Our key contribution lies in the design of simplification and generalization procedures to find good heuristics in action partitioning and constraint resolution. In applying our method to propositional and temporal planning problems from three past International Planning Competitions, we show that SGPlan6, our proposed planner, can solve more instances than other top planners. We demonstrate that SGPlan6 performs well when action partitioning is effective in decreasing heuristic values. We also show that SGPlan6 can achieve better quality-time trade-offs. By exploiting symmetry and locality, our domain-independent planner achieves good coverage while retaining performance comparable to that of domain-specific planners.
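
The partition-and-resolve flow described above can be summarized by the following schematic Python sketch (our simplification, not SGPlan6's code); solve_subproblem and global_violations are placeholder callables supplied by the caller:

    # Schematic partition-and-resolve loop: subproblems are solved independently while violated
    # global constraints are fed back through penalty weights until they are resolved or a
    # round limit is reached (illustrative sketch only).
    def partition_and_resolve(subproblems, global_constraints, solve_subproblem,
                              global_violations, max_rounds=50, step=1.0):
        penalties = {g: 0.0 for g in global_constraints}
        plans = [None] * len(subproblems)
        for _ in range(max_rounds):
            for i, sp in enumerate(subproblems):               # solve each partition locally,
                plans[i] = solve_subproblem(sp, penalties)     # biased by current penalties
            violated = global_violations(plans, global_constraints)
            if not violated:                                   # all cross-partition constraints hold
                return plans
            for g in violated:                                 # ascend on violated global constraints
                penalties[g] += step
        return None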

TP21
Jingxi Xu, ``Optimizing Perceptual Quality for Online Multimedia Systems with Fast-Paced Interactions,'' Ph.D. Thesis, Dept. of Computer Science and Engineering, Chinese Univ. of Hong Kong, Hong Kong, Aug. 2017.
This research addresses the optimization of the perceptual quality of online interactive multimedia applications with fast-paced interactions, including but not limited to voice-over-IP, videoconferencing, and multi-player online games. The main characteristic of these applications is that they need real-time performance and thus have limited time for the evaluation, calculation, and optimization of perceptual quality, where perceptual quality refers to users' subjective satisfaction with their experience. Perceptual quality involves a complex mapping from system controls to human perception that is difficult to model in closed form; rather, it is measured by offline pairwise subjective tests with considerable overhead. To optimize the perceptual quality of the applications of interest in this thesis, we need an efficient methodology to collect sufficient human opinions regarding system controls, to generalize these opinions at run time according to the running context and the network condition, and to optimize the perceptual quality accordingly.

Firstly, we propose a general framework for maintaining stable network transmission for an interactive multimedia application; based on a large-scale measurement of current Internet conditions, we develop a real-time algorithm in this layer for concealing network impairments using run-time statistics. Secondly, to model human opinions, we propose a probabilistic model called the Just-Noticeable Difference (JND) surface, which is a function that maps the change of a control to the subjective awareness of the change. Along with a dominance property that describes the monotonicity of the data in the JND surface, we utilize an efficient algorithm for measuring the human opinions over the whole control space with only a small number of offline subjective tests. Thirdly, to optimize perceptual quality at run time, we utilize another dominance property we have discovered to combine JND surfaces corresponding to a single independent control or multiple dependent controls. Fourthly, we generalize the JND surfaces from the offline measured surfaces according to the run-time network condition by transformations, and then combine them using probabilistic tools. The resulting combined JND surface is used in the objective function for the optimization of perceptual quality.

To verify the techniques and algorithms developed, we demonstrate their advantages with proprietary VoIP systems, including Skype and MSN, and with the online game BZFlag.

TM9
Steven R. Schwartz, ``Resource Constrained Parameter Tuning Applied to Stereo Vision,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, August 1991.

This thesis presents a learning framework for resource constrained parameter tuning of a general stereo vision algorithm. Stereo vision is an inexpensive method for calculating depth information in a scene by triangulating features from two different viewpoints. This depth information is an important part of 3D object recognition used in image understanding applications. One of the significant difficulties in stereo vision, as well as in other vision algorithms, lies in adjusting the variety of parameters to maximize performance. Typically, this task is the responsibility of the designer, and it can entail extensive work in tuning the algorithm over the given image domain. This effort is performed with a minimum of formal guidelines, relying primarily on the skill and patience of the designer. The solution proposed here is a system called Teacher 4.2 (TEchniques for the Automatic Creation of HEuRistics). It consists of a statistical method for systematically and intelligently improving the parameters of the vision algorithm. This method operates under a user-specified deadline, automatically making the trade-off between the number of new parameter sets to consider and the degree of testing to perform on each. Testing time is divided into stages in which subtests are performed on the parameter sets. The first stage generates new parameter sets as necessary and performs initial tests. Successive stages select a subset of the top parameter sets from the previous stage and continue testing. The final result is the parameter set deemed best by the final stage. Various scheduling strategies are explored within the stage framework, including one based on minimizing the statistical risk of the performance of the resulting parameter set being incorrect. The results obtained by this approach show that a significant performance improvement can be obtained by automatic and systematic exploration of the algorithm parameter space.
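
The staged test-and-prune idea described above resembles, in outline, the sketch below. The stage count, the pruning fraction, and the evaluation interface are assumptions made for illustration, not Teacher 4.2's actual schedule.

    # Illustrative sketch of staged testing under a budget: test every surviving
    # candidate a little, keep the best-scoring fraction, and test survivors more.
    # Stage layout and pruning fraction are assumptions, not Teacher 4.2's schedule.

    import random

    def staged_tuning(generate, evaluate, total_tests, n_candidates=32,
                      stages=4, keep_fraction=0.5):
        pool = [[generate(), []] for _ in range(n_candidates)]   # [parameter set, scores]
        per_stage = total_tests // stages
        for _ in range(stages):
            per_candidate = max(1, per_stage // len(pool))
            for cand in pool:
                cand[1].extend(evaluate(cand[0]) for _ in range(per_candidate))
            pool.sort(key=lambda c: sum(c[1]) / len(c[1]), reverse=True)  # best mean first
            pool = pool[:max(1, int(len(pool) * keep_fraction))]
        return pool[0][0]

    # Toy usage: a "parameter set" is one number; performance is noisy closeness to 0.7.
    best = staged_tuning(generate=lambda: random.random(),
                         evaluate=lambda p: -abs(p - 0.7) + random.gauss(0, 0.05),
                         total_tests=400)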

TM10
Chin-Chi Teng, ``Mixed-Mode Supervised Learning Algorithms for Multi-Layer Feed-Forward Neural Networks,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, Aug. 1993.

In this thesis, we present a new supervised learning mechanism for feed-forward neural networks. Our learning mechanism is different from those for traditional supervised learning algorithms (such as the back-propagation algorithm) that find a one-to-one mapping between the given input pattern matrix and the desired output pattern matrix. Instead, it finds one of the one-to-many mappings between the input matrix and an intermediate output matrix and transforms the intermediate output matrix into the desired output matrix using linear algebra techniques. The major contributions of this thesis are the design and experimentation of a new supervised learning mechanism for improving the performance of current supervised learning algorithms. These improvements include: (1) reduction of the learning time for some existing supervised learning algorithms (such as backpropagation and Quickprop); (2) reduction of the number of hidden units needed for convergence for the cascade-correlation learning algorithm. Further, our extensive experimental results show that our learning algorithm converges to within a reasonable range of error after a few training epochs, making it suitable for dynamic real-time applications in which the network may have to be re-trained periodically.
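
One possible reading of the linear-algebra step mentioned above is a least-squares map from an intermediate output matrix to the desired outputs. The sketch below is purely illustrative and is not necessarily the transformation used in the thesis.

    # Illustrative sketch only: map an intermediate output matrix Y to the desired
    # output matrix D by a least-squares linear transformation W (Y @ W ~= D).
    # This is one possible linear-algebra step, not necessarily the thesis's method.

    import numpy as np

    rng = np.random.default_rng(0)
    Y = rng.standard_normal((100, 8))           # intermediate outputs (100 patterns, 8 units)
    D = rng.standard_normal((100, 3))           # desired outputs (100 patterns, 3 targets)

    W, *_ = np.linalg.lstsq(Y, D, rcond=None)   # solve min_W ||Y W - D||_F
    D_hat = Y @ W                               # transformed intermediate outputs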

TM11
Scott Allen Yenerich, ``ATM Network Performance Simulator,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, June 1995.

This thesis presents a system that can be used to simulate the traffic conditions that may be observed by applications. In addition, using a real application, we have examined the performance of a network with ATM-like transmission rates under various traffic loads. The simulation system is designed as a layer below the application layer. The simulator is passed the message or packet from the sending application. The simulation is performed; the packet is then sent to the simulator at the receiving end. This copy of the simulator forwards the packet to the receiving application. The simulation itself consists of attempting to send the packet through the ATM-like network in the presence of other traffic. The characteristics of the traffic on the network are created with the use of several variable parameters. The measured result is the total time required to send the entire packet. This result has two components: the time the packet spends waiting to be sent, and the time it takes to transmit the entire packet.
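
In outline, the measured result decomposes as in the sketch below. The line rate and the waiting-time model for competing traffic are placeholder assumptions, not the simulator's actual traffic parameters.

    # Illustrative sketch: total packet time = waiting time (due to competing traffic)
    # + transmission time at an ATM-like link rate. The traffic model is a placeholder.

    import random

    LINK_RATE_BPS = 155_000_000      # ATM-like line rate (~155 Mb/s); an assumed value

    def simulate_send(packet_bits, background_load):
        """Return (wait_time, transmit_time) in seconds for one packet; the waiting
        time under competing traffic is drawn from a crude placeholder model."""
        transmit_time = packet_bits / LINK_RATE_BPS
        wait_time = (random.expovariate(1.0) * transmit_time *
                     background_load / (1.0 - background_load))
        return wait_time, transmit_time

    wait, tx = simulate_send(packet_bits=8 * 9180, background_load=0.6)
    total_time = wait + tx           # the measured result: waiting plus transmission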

TM12
Xiao Su, ``Design and Evaluation of a Window-Based Wireless Medium Access Control Protocol,'' M.Sc. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, May 1997.

The proliferation of wireless devices calls for a suitable network infrastructure for supporting communication among them. Medium Access Control (MAC), the most crucial part of the protocol stack, is discussed in this thesis. The design of wireless MAC protocols differs significantly from that of wired MAC protocols because of distinctive features of the wireless medium, such as high error rates, limited radio propagation range, and low bandwidth.

An efficient, scalable, and fair protocol -- the Wireless Window Protocol (WWP) -- is proposed. WWP is a limited-contention protocol in which the set of contending mobiles is chosen based on a global contention window maintained by every mobile station. The contention window is adjusted based on three possible channel states: no transmission, success, and collision. The window bounds are functions of the current channel load. Simulations show that the performance of WWP is superior to that of the distributed coordination function of the IEEE 802.11 draft standard.
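
In outline, the window adjustment could look like the sketch below. The adjustment factors and the load-dependent bounds are illustrative assumptions, not the thesis's actual rules.

    # Illustrative sketch: the contention window selects which stations may contend;
    # a collision suggests too many contenders (narrow the window), an idle slot
    # suggests too few (widen it). Factors and bounds are assumptions, not WWP's rules.

    IDLE, SUCCESS, COLLISION = "idle", "success", "collision"

    def adjust_window(window, state, load):
        """Return the new window size after observing one channel state; in WWP the
        bounds depend on the current channel load (linear scaling is a placeholder)."""
        w_min = 1 + int(4 * load)            # placeholder load-dependent bounds
        w_max = 16 + int(240 * load)
        if state == COLLISION:
            window = max(w_min, window // 2)     # too many contenders: narrow the window
        elif state == IDLE:
            window = min(w_max, window * 2)      # nobody contended: widen the window
        # on SUCCESS the window is left unchanged
        return min(max(window, w_min), w_max)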

TM13
Ting Yu, ``QMF Filter Bank Design Using Nonlinear Optimization,'' M.Sc. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, May 1997.

This research is about QMF filter bank design using nonlinear optimization methods. Filter banks have applications in areas such as video and audio coding and data communication, so improvements in their design will have important impacts in those areas.

Filter bank design is a multi-objective optimization problem. One important issue in this research is that the design objectives are not unique and are conflicting, leading to designs with different tradeoffs. Here, we define reconstruction error as the objective function, and set stopband energy, passband energy, stopband ripple, passband ripple, and transition bandwidth as constraints. Therefore, filter bank design can be formulated as a single-objective multiple-constraint optimization problem. As both the objective function and constraints are nonlinear, methods for solving nonlinear constrained optimization problems are applied.
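
In symbols, the formulation described above can be written as the following constrained problem; the symbols and thresholds below are generic placeholders rather than the thesis's notation.

    \min_{h} \; E_r(h) \quad \text{subject to} \quad
    E_s(h) \le \theta_1, \;\; E_p(h) \le \theta_2, \;\;
    \delta_s(h) \le \theta_3, \;\; \delta_p(h) \le \theta_4, \;\; T_t(h) \le \theta_5,

where $h$ is the vector of filter coefficients, $E_r$ the reconstruction error, $E_s$ and $E_p$ the stopband and passband energies, $\delta_s$ and $\delta_p$ the stopband and passband ripples, $T_t$ the transition bandwidth, and $\theta_1,\dots,\theta_5$ user-specified constraint thresholds.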

Because the reconstruction error, stopband energy, and passband energy are expressed as integrals, closed-form formulae are derived for these functions and their derivatives in order to maintain precision and to save computation time.

The main difficulty in solving a nonlinear optimization problem is finding the global solution. Novel, a new nonlinear optimization package, is applied here; a heuristic function implemented in Novel leads the search trajectory to explore the whole problem space. Two methods for handling inequality constraints, the Maxq and slack-variable methods, are compared in this research.

Experimental results are presented at the end. Issues such as the impact of the constraint formulation and of the weight on the objective function are discussed. We compare the performance of our designs with the best known solutions and show the improvements achieved by our designs.

TM14
Liwei Wang, ``Video on Demand using TCP: An Experimental Study,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, Dec. 1997.

This thesis examines experimentally the feasibility of using TCP for real-time multimedia traffic. We first analyze the characteristics of an MPEG stream and its bandwidth requirement. Then we study the quality of service over TCP in terms of delay, jitter, and throughput. By comparing the real-time multimedia traffic requirements and the QoS provided by TCP, we conclude that TCP is unable to support real-time MPEG traffic without significant quality degradations.

TM15
Zhe Wu, ``The Discrete Lagrangian Theory and its Application to Solve Nonlinear Discrete Constrained Optimization Problems,'' M.Sc. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, May 1998.

In this research we present new results on discrete Lagrangian methods (DLM) and extend our previous (incomplete and highly simplified) theory on the method. Our proposed method forms a strong mathematical foundation for solving general nonlinear discrete optimization problems. Specifically, we show for continuous Lagrangian methods the relationship among local minimal solutions satisfying constraints, solutions found by the first-order necessary and second-order sufficient conditions, and saddle points. Since there is no corresponding definition of gradients in discrete space, we propose a new vector-based definition of gradient, develop first-order conditions similar to those in continuous space, propose a heuristic method to find saddle points, and show the relationship between saddle points and local minimal solutions satisfying constraints. We then show, when all the constraint functions are non-negative, that the set of saddle points is the same as the set of local minimal points satisfying constraints. Our formal results for solving discrete problems are stronger than the continuous counterparts because saddle points in discrete space are equivalent to local minima satisfying constraints, whereas local minimal solutions satisfying constraints in continuous space do not necessarily satisfy the first- and second-order conditions. To verify our method, we apply it to solve discrete benchmark problems, design multiplierless QMF filter banks, and solve some mixed nonlinear integer optimization benchmark problems. We also study methods to speed up convergence while maintaining solution quality. Our experimental results demonstrate that DLM is both effective and efficient.
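
A minimal sketch of a discrete Lagrangian descent/ascent iteration, assuming nonnegative constraint functions as in the abstract, is given below. The neighborhood interface, the step constant, and the update order are illustrative, not the thesis's exact procedure.

    # Illustrative sketch of discrete Lagrangian search on L(x, lam) = f(x) + lam . g(x),
    # with g(x) >= 0 componentwise: descend on x over a discrete neighborhood; when no
    # improving neighbor exists and some constraint is violated, ascend on the multipliers.

    def discrete_lagrangian_search(f, g, neighbors, x0, max_iters=100000, c=1.0):
        """Minimize f subject to g(x) == 0 over a discrete space explored via neighbors(x)."""
        x, lam = x0, [0.0] * len(g(x0))
        L = lambda y: f(y) + sum(l * gi for l, gi in zip(lam, g(y)))   # discrete Lagrangian
        for _ in range(max_iters):
            best = min(neighbors(x), key=L, default=x)
            if L(best) < L(x):
                x = best                                               # descent in the variables
            elif any(gi > 0 for gi in g(x)):
                lam = [l + c * gi for l, gi in zip(lam, g(x))]         # ascent on violated constraints
            else:
                return x            # no improving neighbor and no violation: a saddle point
        return x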

TM16
Dong Lin, ``Real-Time Voice Transmissions over the Internet,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, Dec. 1998.

This research identifies the problems encountered in transmitting voice over the Internet and proposes approaches to solve them. The current Internet is not very suitable for transmitting real-time data because its underlying protocols and switches were engineered only to transmit non-real-time data. The problems posed by voice over the Internet are studied by conducting a series of experiments. These problems, caused by high loss, large delay, and jitter, seriously affect transmission quality, so a good design needs to combine different components that solve these problems together. Silence removal and compression are used to reduce bandwidth usage. Jitter buffers are used to smooth the burstiness in the received stream caused by the network. To conceal loss, we investigate existing methods and propose two new nonredundant reconstruction algorithms based on a simple but effective average reconstruction scheme. The first applies adaptive filtering on top of average reconstruction in order to exploit the signal trend and to obtain better estimates; the effects of important filter parameters, such as filter length and adaptation step size, are studied. The second, called the transformation-based reconstruction method, is developed to handle the low reconstruction quality caused by rapidly changing parts of signals. Its basic idea is to let the sender transform the input voice stream, based on the reconstruction method used at the receiver, before sending the data packets. Consequently, the receiver is able to recover much better from packet losses than it would without any knowledge of what the signals should be. This method can improve reconstruction quality without significant computational cost and can be easily extended to different interleaving factors and different interpolation-based reconstruction algorithms.
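
As a rough illustration of the interleaving-plus-average-reconstruction baseline mentioned above, consider the sketch below; the framing, the interleaving factor, and the scalar frame values are assumptions for illustration only.

    # Illustrative sketch: send voice frames interleaved across packets so that one
    # lost packet removes every k-th frame; reconstruct each missing frame as the
    # average of its surviving neighbors.

    def interleave(frames, k):
        """Split frames into k packets: packet i carries frames i, i+k, i+2k, ..."""
        return [frames[i::k] for i in range(k)]

    def reconstruct(frames, lost):
        """Replace each lost frame (index in `lost`) by the average of its neighbors."""
        out = list(frames)
        for i in sorted(lost):
            left = out[i - 1] if i > 0 and (i - 1) not in lost else None
            right = out[i + 1] if i + 1 < len(out) and (i + 1) not in lost else None
            known = [v for v in (left, right) if v is not None]
            out[i] = sum(known) / len(known) if known else 0.0
        return out

    packets = interleave(frames=[0.1, 0.4, 0.2, 0.8, 0.3, 0.6], k=2)
    # losing packet 0 removes frames 0, 2, 4; they are estimated from frames 1, 3, 5
    recovered = reconstruct([0.0, 0.4, 0.0, 0.8, 0.0, 0.6], lost={0, 2, 4})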

TM17
Jeffrey P. Monks, ``Optimizing Real-Time Audio Signals over Mobile Networks,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, Dec. 1998.

This thesis explores various methods for optimizing real-time audio communications over wireless data networks. Many current methods for improving communications focus on optimizing the communications protocol. In this thesis, the protocol is not changed, and any protocol suffices: the methods investigated here optimize communications through signal processing and data manipulation on top of a well-accepted protocol. The optimizations discussed balance signal quality, bandwidth, and robustness.

TM18
Yixin Chen, ``Optimal Anytime Search for Constrained Nonlinear Programming,'' M.Sc. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, May 2001.

In this thesis, we study optimal anytime stochastic search algorithms (SSAs) for solving general constrained nonlinear programming problems (NLPs) in discrete, continuous and mixed-integer space. The algorithms are general in the sense that they do not assume differentiability or convexity of functions. Based on the search algorithms, we develop the theory of SSAs and propose optimal SSAs with iterative deepening in order to minimize their expected search time. Based on the optimal SSAs, we then develop optimal anytime SSAs that generate improved solutions as more search time is allowed.

Our SSAs for solving general constrained NLPs are based on the theory of discrete constrained optimization using Lagrange multipliers that shows the equivalence between the set of constrained local minima and the set of discrete-neighborhood saddle points (DSP). To implement this theory, we propose a general procedural framework for locating a DSP. By incorporating genetic algorithms in the framework, we evaluate new constrained search algorithms: constrained genetic algorithm (CGA) and combined constrained simulated annealing and genetic algorithm (CSAGA).

All these algorithms are SSAs. One of the most important and difficult issues in using SSAs is scheduling them so as to optimize the average search efficiency. Our research shows that SSAs can be scheduled in such a way that their expected search time is minimized. The theory uses iterative deepening to identify schedules whose expected search time is within a constant factor of that of the same SSA run under an optimal schedule. We also extend the theory to identify optimal schedules in parallel processing of SSAs and show that finding an optimal parallel schedule is NP-complete. We propose to use iterative deepening to find a suboptimal parallel schedule and derive performance bounds between our suboptimal schedule and the optimal parallel schedule. The theory is general enough for SSAs and has been applied to CSA, CGA, and CSAGA to develop optimal schedules for these algorithms.
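
In outline, an iterative-deepening schedule amounts to restarting the stochastic search with geometrically growing time budgets, as in the sketch below; the growth factor of 2 and the run interface are assumptions for illustration.

    # Illustrative sketch of an iterative-deepening schedule for a stochastic search:
    # run the SSA under a small time budget and grow the budget until it succeeds.

    def iterative_deepening(run_ssa, t0=1.0, growth=2.0, max_budget=1e6):
        """run_ssa(budget) returns a solution or None if it fails within the budget.
        With geometric growth, the total time spent is within a constant factor
        of the last (successful) budget."""
        budget = t0
        while budget <= max_budget:
            solution = run_ssa(budget)
            if solution is not None:
                return solution, budget
            budget *= growth
        return None, budget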

Based on the optimal schedules, we propose optimal anytime SSAs that generate solutions of improved quality as more time is used. We propose new schedules over successively improving quality levels, such that the total time spent on all quality levels is of the same order of magnitude as that of finding a constrained global optimum.

Finally, we apply our optimal anytime SSAs to solve a collection of engineering application benchmarks. Much better results have been reported in comparison with other existing methods.

TM19
Hong Hai Zhang, ``Improving Constrained Nonlinear Search Algorithms Through Constraint Relaxation,'' M.Sc. Thesis, Dept. of Computer Science, Univ. of Illinois, Urbana, IL, Dec. 2001.

In this thesis, we study constraint relaxations of various nonlinear programming (NLP) algorithms in order to improve their performance. For both stochastic and deterministic algorithms, we study the relationship between the expected time to find a feasible solution and the constraint-relaxation level, build an exponential model based on this relationship, and develop a constraint-relaxation schedule such that the total time spent to find a feasible solution over all the relaxation levels is of the same order of magnitude as the time spent to find a solution of similar quality using the last relaxation level alone.

When the objective and constraint functions are stochastic, we define new criteria of constraint satisfaction and similar constraint relaxation schedules. Similar to the case when functions are deterministic, we build an exponential model between the expected time to find a feasible solution and the associated constraint relaxation level. We develop an anytime constraint relaxation schedule in such a way that the total time spent to solve a problem for all constraint relaxation levels is of the same order of magnitude as the time spent for finding a feasible solution using the last relaxation level alone. Finally, we study the asymptotic behavior of our new algorithms and prove their asymptotic convergence, based on the theory of asymptotic convergence in simulated annealing and constrained simulated annealing.
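
A minimal sketch of an anytime relaxation schedule in this spirit is given below; the geometric tightening factor, the number of levels, and the solver interface are illustrative assumptions.

    # Illustrative sketch: solve under progressively tighter constraint relaxations,
    # warm-starting each level from the previous solution.

    def anytime_relaxation(solve, alpha0=1.0, shrink=0.5, levels=6):
        """solve(alpha, warm_start) returns a solution feasible within relaxation alpha.
        Yields (alpha, solution) pairs as the relaxation level alpha is tightened."""
        solution = None
        alpha = alpha0
        for _ in range(levels):
            solution = solve(alpha, warm_start=solution)
            yield alpha, solution
            alpha *= shrink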

TM20
Hang Yu, ``Real-Time Transmission of JPEG-2000 Images over the Internet,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, May 2004.

This thesis identifies the trade-offs encountered in transmitting images in real-time over the Internet and proposes two approaches to solve them. Image delivery over the Internet can use either the TCP or the UDP protocol. Delivery by TCP causes no image quality degradations, but has long delays when there are packet losses. On the other hand, although UDP has shorter delays, its delivery is not preferred because it does not recover lost packets during transmission, thus causing serious quality degradations.

In this thesis, we address the quality-delay trade-offs by using multiple description coding (MDC) and unequal error protection (UEP), together with the UDP protocol, to deliver JPEG 2000 images over the Internet. We first analyze Internet traffic data and determine a proper interleaving factor by which UDP packets are interleaved in order for losses to be below a certain threshold. We then propose an MDC scheme with a new reconstruction strategy carried out in the frequency domain, thereby avoiding sample-domain segmentation, which is a major cause of performance degradation in previous sample-domain MDC schemes. An optimized reconstruction-based transform for the new scheme is also derived and incorporated into our proposed MDC scheme. We examine the performance of our MDC algorithm under controlled-loss scenarios and real Internet trace-driven simulations, and it performs quite well in both. We also study the quality-delay behavior of our proposed scheme under UDP and compare it with that of traditional TCP delivery. The results show that this new MDC scheme can be an attractive alternative to the traditional approach.

Last, we study UEP schemes that pack code-blocks and FEC codes together in both serial and parallel styles. Based on this study, we propose a hybrid packing algorithm that applies serial or parallel packing to different parts of a code stream. We test our proposed hybrid packing algorithm under several measures, namely, PSNR, undecodable probability, and sensitivity. The results show that our proposed hybrid packing scheme compares favorably with TCP delivery.

TM21
Zixia Huang, ``The Design of A Multi-party VoIP Conferencing System,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, May 2009.

This thesis identifies the problems and trade-offs of a multi-party VoIP conferencing system implemented over the Internet and proposes approaches to solve these problems. The current Internet is unreliable, and it degrades the conversational quality of real-time multi-party conferencing. Delay disparities may cause unbalanced silence periods, and losses and jitters may affect the intelligibility of speech segments received. We collect real Internet traces from the PlanetLab and classify them into different categories according to the traffic behavior. After studying the conversational dynamics in the multi-party system, we identify user-observable metrics that affect the perception of conversational quality and study their trade-offs. Based on the dynamics and the Internet traces, we design the transmission topology to reduce delay variations and to avoid links with high losses and jitters. We propose loss concealment schemes for reducing the packet drop rate and play-out scheduling algorithms for equalizing silence periods and smooth jitters. We also discuss issues and solutions in a practical multi-party VoIP system design. We compare the performance of our system and that of Skype (Version 3.5.0.214) using repeatable experiments that simulate human participants and network conditions in a multi-party conferencing scenario. Our limited, subjective tests show that we can improve the perceptual quality when network connections are lossy and have large delay disparities. Because it is impossible to conduct subjective tests under all possible conditions, we have developed a classifier that learns to select the best equalization algorithm using learning examples derived from subjective tests under limited network and conversational conditions. Experimental results show that our classifier can consistently pick the best algorithm with the highest subjective conversational quality under unseen conditions.

TM22
Wee-Hong Yeo, ``Finding Perceptually Optimal Operating Points of a Real Time Interactive Video-Conferencing System,'' M.Sc. Thesis, Dept. of Electrical and Computer Engineering, Univ. of Illinois, Urbana, IL, May 2011.

This research aims to address issues faced by real-time video-conferencing systems in locating a perceptually optimal operating point under various network and conversational conditions. In order to determine the perceptually optimal operating point of a video-conferencing system, we must first be able to conduct a fair assessment of the quality of the current operating point and compare it with another operating point to determine which has better perceptual quality. However, at present there is no single objective quality metric that can accurately and fully describe the perceptual quality of a real-time video conversation. Hence, there is a need for a controlled environment in which tests can be conducted, different metrics can be studied, and the best trade-offs among them can be identified.

We begin by studying the components of a typical setup of a real-time video-conferencing system and the impact that various network and conversational conditions can have on the overall perceptual quality. We also look into the different metrics available to measure those impacts. We then create a platform to perform black-box testing on current video-conferencing systems and observe how they handle changes in operating conditions. The platform is used to conduct a brief evaluation of the performance of Skype, a popular commercial video-conferencing system; however, we are not able to modify the system parameters of Skype.

The main contribution of this thesis is the design of a new testbed that provides a controlled environment in which tests can be conducted to determine the perceptually optimal operating point of a video conversation under specified network and conversational conditions. The testbed allows us to modify parameters, such as frame rate and frame size, that were not previously modifiable. It takes as input two recorded videos of the two speakers of a face-to-face conversation, together with the desired output video parameters, such as frame rate, frame size, and delay. A video-generation algorithm is designed as part of the testbed to handle modifications to the frame rate and frame size of the videos, as well as delays inserted into the recorded video conversation to simulate the effects of network delays. The most important issue addressed is the generation of new frames to fill the gaps created by a change in frame rate or an inserted delay; unlike the case of voice, a period of silence cannot simply be used to handle these situations.

The testbed uses a packetization strategy based on an uneven packet transmission rate (UPTR) that handles the packetization of interleaved video and audio data; it also uses piggybacking to provide redundancy when required. Losses can be injected either randomly or based on packet traces collected via PlanetLab. The processed videos are then pieced together side-by-side to give the viewpoint of a third party observing the video conversation from the site of the first speaker. Hence, the first speaker is observed to have a faster reaction time, without network delays, than the second speaker, who is simulated to be located at the remote end. The video of the second speaker also reflects the degradations in perceptual quality induced by the network conditions, whereas the video of the first speaker is of perfect quality. With the testbed, we are therefore able to generate output videos for different operating points under the same network and conversational conditions, and thus to compare two operating points.

With the testbed in place, we demonstrate how it can be used to evaluate the effects of various parameters on the overall perceptual quality. Lastly, we apply an existing efficient search algorithm, previously used for estimating the perceptually optimal mouth-to-ear delay (MED) of a voice-over-IP (VoIP) conversation, to a video conversation. This is achieved by using the testbed to conduct a series of subjective and objective tests to identify the perceptually optimal MED under specific network and conversational conditions.

TM23
Lili Liu, ``Pattern-Based Method for Reducing Drawdowns in Stock Index Investment,'' M.Phil. Thesis, Dept. of Computer Science and Engineering, Chinese Univ. of Hong Kong, Hong Kong, May 2015.

Drawdown is the percentage loss of wealth from the previous peak. It is an important downside-risk metric for measuring the performance of portfolio investment. Constructing a portfolio with small drawdowns and high returns is the consistent goal of all investors. However, large drawdowns are not rare in financial history, and they have occurred even on stock indices. To the best of our knowledge, there are no effective and practical drawdown-reduction approaches.

In this thesis, we propose a novel pattern-based drawdown-reduction model to reduce future drawdowns of stock-index investment. It first extracts two kinds of relevant features, ddFeature and priceFeature, from historical stock-index price series, then trains a prediction model based on these pattern-and-drawdown pairs in history, and finally makes investment decisions according to a binary investment function. The effects of the parameters involved in feature extraction and investment decisions are studied in detail, and an adaptive investment method is also proposed. Experimental results on three different datasets, the S&P 500, the NASDAQ Composite, and the HSI, show that the proposed method achieves both small drawdowns and high cumulative returns.
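
For reference, drawdown as defined above (percentage loss from the previous peak) can be computed from a price series as in the short sketch below.

    # Drawdown at time t: percentage loss of wealth relative to the running peak.

    def drawdowns(prices):
        """Return the drawdown (as a fraction) at each point of a price series."""
        peak, out = float("-inf"), []
        for p in prices:
            peak = max(peak, p)
            out.append((peak - p) / peak)
        return out

    def max_drawdown(prices):
        return max(drawdowns(prices))

    # Example: a peak of 120 followed by a trough of 90 gives a maximum drawdown of 25%.
    assert abs(max_drawdown([100, 120, 90, 110]) - 0.25) < 1e-12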

TM24
Feihu Zhang, ``Learning New Features for Effective Dense Matching with Supplementary Meta-Level Deep Neural Networks,'' M.Phil. Thesis, Dept. of Computer Science and Engineering, Chinese Univ. of Hong Kong, Hong Kong, July 2017.

In dense matching (including stereo matching and optical flow), nearly all existing approaches use simple features, such as gray-scale or RGB color and gradients, or simple transformations such as the census transform, to calculate matching costs. These features do not perform well in complex scenes that may involve radiometric changes, noise, overexposure, and/or textureless regions. Various problems may appear, such as wrong matches at the pixel or region level, flattening/breaking of edges, or even entire structural collapse.

In this thesis, we first propose two fundamental principles based on the consistency and distinctiveness of features to identify truly good features for dense matching, and we show that almost all existing problems in dense matching are caused by features that violate one or both of these principles. Then, to systematically learn good features for dense matching, we formulate these two principles as a general multi-objective optimization for the learning system and study the trade-off between the two principles. Finally, traditional learning systems (e.g., convolutional neural networks) only try to find a model with fixed parameters that optimizes the average behavior over all inputs, without considering data-specific properties (e.g., variances in texture or illumination conditions). To overcome this problem, we propose a new meta-learning method that learns and assimilates continuous meta-knowledge into the base model, making the weights/parameters adaptive to different inputs without losing generalization capability. The proposed meta-level learning strategy is applied to deep neural networks (DNNs), which serve as the feature extractor that finds effective features for dense matching.

Using two-frame optical flow and stereo matching as applications, we verify the effectiveness of the proposed feature principles and objective model. Our experimental results show that the learned features can significantly improve the performance of state-of-the-art dense matching algorithms; on the KITTI benchmarks, our method ranks first on the two stereo benchmarks as of the end of 2016 and is the best among existing two-frame optical-flow algorithms on the flow benchmarks. In addition, compared with state-of-the-art DNN architectures (e.g., inception and residual networks), the feature extractors implemented by our MLNN achieve lower error rates in both stereo matching and optical flow.

PAT2
B. W. Wah and D. Lin, ``Method and Program Product for Organizing Data into Packets,'' US Patent Number 6754203 B2, Filed: March 1, 2002, Granted: June 22, 2004.
A method for communicating data over a packet-switched network comprises dividing the data into a plurality of frames, with each frame described by at least a first and a second parameter, where the second parameter is highly correlated from frame to frame. The first parameter is placed in both a first and a second description, while the second parameter is interleaved between the first and second descriptions. The first and second descriptions are packetized and communicated over the network. Upon reception, the first parameters for a frame sequence are extracted from one of the packets, while the interleaved second parameters are extracted from both packets. If a packet is lost, the missing first parameter may be obtained from another packet, while the missing second parameter may be reconstructed using a second parameter from the other packet.
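
In outline, the packetization described above could be sketched as follows; the frame structure, field names, and neighbor-averaging estimate are illustrative assumptions, not the patented implementation.

    # Illustrative sketch: each frame has a first parameter (duplicated into both
    # descriptions) and a second parameter (interleaved: even frames to description 0,
    # odd frames to description 1). Field names are assumptions, not the patent's.

    def make_descriptions(frames):
        """frames: list of (first_param, second_param) pairs for consecutive frames."""
        d0, d1 = [], []
        for i, (p1, p2) in enumerate(frames):
            d0.append({"first": p1, "second": p2 if i % 2 == 0 else None})
            d1.append({"first": p1, "second": p2 if i % 2 == 1 else None})
        return d0, d1

    def reconstruct_from_one(description):
        """Recover all frames from a single surviving description: every entry carries
        the first parameter; each missing second parameter is estimated from the
        neighboring frames that did arrive (a simple average here)."""
        seconds = [e["second"] for e in description]
        for i, s in enumerate(seconds):
            if s is None:
                known = [v for v in (seconds[i - 1] if i > 0 else None,
                                     seconds[i + 1] if i + 1 < len(seconds) else None)
                         if v is not None]
                seconds[i] = sum(known) / len(known) if known else 0.0
        return [(e["first"], s) for e, s in zip(description, seconds)]

    d0, d1 = make_descriptions([(10, 0.1), (11, 0.3), (12, 0.2), (13, 0.6)])
    frames_if_d1_lost = reconstruct_from_one(d0)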

R08
B. W. Wah, ``MIDA*: An IDA* Search with Dynamic Control,'' Research Report CRHC-91-09, Center for Reliable and High Performance Computing, Coordinated Science Laboratory, Univ. of Illinois, Urbana, IL, April 1991.

In this paper, we extend Korf's IDA* search method so that it can dynamically determine the threshold in each iteration based on run-time information collected during the search. Current IDA* search strategies are based on the assumption that the distribution of nodes in the search tree by their lower-bound values is exponential, so a static strategy that extends the thresholds by a constant amount in each iteration will result in an exponential increase in the number of nodes expanded in successive iterations. We show that this distribution is not necessarily exponential for all problems, and that a dynamic strategy that determines the thresholds based on collected run-time information results in lower search overhead. We propose an objective function for evaluating IDA* searches based on the ratio of the number of nodes evaluated by an IDA* search to the number evaluated by a pure best-first search. We analyze the performance of IDA* searches for both continuous and discrete search problems and establish the optimal performance of an IDA* search. Finally, we develop strategies for determining thresholds at run time and show performance improvements for the 15-puzzle, traveling-salesman, and knapsack problems.
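
The dynamic-threshold idea could be sketched roughly as below; the growth-ratio target and the statistic used to predict expansions are illustrative assumptions, not MIDA*'s exact control rule.

    # Illustrative sketch of dynamic threshold selection for IDA*: after each
    # iteration, pick the next cost threshold from the f-values of the pruned nodes
    # so that the predicted number of expansions grows by roughly a target ratio,
    # instead of extending the threshold by a fixed static amount.

    def next_threshold(pruned_f_values, expanded_last, target_ratio=4.0):
        """pruned_f_values: f-values of nodes cut off in the last iteration.
        Choose the smallest threshold that admits roughly
        (target_ratio - 1) * expanded_last of the pruned nodes."""
        if not pruned_f_values:
            return None                   # nothing was pruned: the search is exhausted
        fs = sorted(pruned_f_values)
        want = int((target_ratio - 1.0) * max(1, expanded_last))
        return fs[min(want, len(fs) - 1)]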

R13
Kumar Ganapathy and Benjamin W. Wah, ``Optimal Synthesis of Algorithm-specific Lower-Dimensional Processor Arrays,'' Tech. Rep. CRHC 93-23, Center for Reliable and High Performance Computing, Coordinated Science Laboratory, University of Illinois, Urbana, IL, November 1993.

Processor arrays are frequently used to deliver high performance in many applications with computationally intensive operations. This paper presents the General Parameter Method (GPM), a systematic parameter-based approach for synthesizing such algorithm-specific architectures. GPM can synthesize processor arrays of any lower dimension from a uniform-recurrence description of the algorithm. The design objective is a general nonlinear, non-monotonic, user-specified function that depends on attributes such as the computation time of the recurrence on the processor array, completion time, load time, and drain time. In addition, bounds on some or all of these attributes can be specified. GPM performs an efficient search of polynomial complexity to find the optimal design satisfying the user-specified design constraints. As an illustration, we show how GPM can be used to find linear processor arrays for the problem of finding the transitive closure. We consider design objectives that minimize computation time, processor count, or completion time (including load and drain times), with user-specified constraints on the number of processing elements and/or computation/completion times. We show that GPM can be used to obtain optimal designs that trade off the number of processing elements against completion time, thereby allowing the designer to choose a design that best meets the specified design objectives. We also show the equivalence between the model assumed in GPM and that in the popular dependence-based methods [Kuhn 1980, Moldovan 1982]. Consequently, GPM can be used to find optimal designs for both models.

R14
Arthur Ieumwananonthachai and Benjamin W. Wah, ``Teacher -- An Automated System for Learning Knowledge-Lean Heuristics,'' Tech. Rep. CRHC 95-08, Center for Reliable and High Performance Computing, Coordinated Science Laboratory, University of Illinois, Urbana, IL, March 1995.

In this report, we present Teacher, a system for learning knowledge-lean heuristics under resource constraints. Teacher is an implementation of a genetics-based learning framework we have developed for improving the performance of heuristics in application problem solvers. Besides offering a flexible and modular framework for conducting experiments, Teacher provides (a) a test-bed for experimenting with various resource-scheduling, generalization, and heuristics-generation strategies, and (b) an automated learning system that can be easily interfaced to new applications and customized to user requirements and target environments. This report describes the application-independent (AppI) functions provided by Teacher and the application-dependent (AppD) functions for interfacing to new problem solvers. By adjusting various global parameters, users can control the numerous options and alternatives in Teacher. To illustrate our design, we use CRIS, a genetics-based test-pattern generation system, as a running example throughout the report.

O01
Lon-Chan Chu, ``ISE -- An Integrated Search Environment: The Manual,'' Tech. Rep. CRHC 92-01, Center for Reliable and High Performance Computing, Coordinated Science Laboratory, University of Illinois, Urbana, IL, January 1992.

In this manual, we describe the software package ISE (an acronym for Integrated Search Environment), a tool that implements hierarchical searches with metacontrol. ISE is a collection of problem-independent routines that support search processes; these core routines solve a search problem, handle the control of searches, and maintain search-related statistics. By separating the problem-dependent and problem-independent parts in ISE, new search methods can be implemented by calling existing methods and developed easily by coding the metacontrol, and new applications can be developed by coding only the problem-dependent parts. Potential users of ISE include designers of new application solvers and new search algorithms, as well as users who wish to experiment with them. ISE is designed to be user-friendly and information-rich. This manual describes the organization of ISE and shows some sample runs.