This article reports the findings of the Workshop on High Performance Computing and Communications (HPCC) for Grand Challenge Applications: Computer Vision, Speech and Natural Language Processing (SNLP), and Artificial Intelligence (AI). The goals of the workshop were to identify applications, research problems, and designs of HPCC systems for supporting applications in these areas.
In computer vision, we have identified the main scientific issues as machine learning, surface reconstruction, inverse optics and integration, model acquisition, and perception and action. Since vision algorithms operate at different levels of granularity, computers supporting these algorithms need to be heterogeneous and modular. Advances in technology, new architectural concepts, and software design methods are essential for this area.
In SNLP, we have identified issues in statistical analysis for corpus-based speech and language understanding, search strategies for language analysis, auditory and vocal-tract modeling, integration of multiple levels of speech and language analyses, and connectionist systems. As in computer vision, algorithms in SNLP require high computational power, ranging from general-purpose supercomputing to special-purpose VLSI systems. Because processing requirements vary widely, a hybrid heterogeneous computer system is the most desirable.
In AI, important issues that need immediate attention include the development of efficient machine learning and heuristic search methods that can adapt to different architectural configurations, and the design and construction of scalable and verifiable knowledge bases, active memories, and artificial neural networks. A computer system for supporting AI applications is heterogeneous, requiring research in high speed computer networks, mass storage and efficient retrieval methods, computer languages, and hardware and software design tools.
Research in these areas is inherently multidisciplinary and will require active participation of researchers in device and networking technologies, signal processing, computer architecture, software engineering, and knowledge engineering. Besides extending current frontiers in research, an important aspect to be emphasized is the integration of existing components and results into working systems.
In this paper, we have studied various supervised learning methods for training feed-forward neural networks. In general, such learning can be considered a nonlinear global optimization problem in which the goal is to minimize a nonlinear error function that spans the space of weights, using heuristic strategies that look for global optima (in contrast to local optima). We have surveyed various global optimization methods suitable for neural-network learning, and have proposed the Novel method (Nonlinear global Optimization method Via External Lead) for nonlinear optimization and neural-network learning. By combining global and local searches, we show how Novel can be used to find a good local minimum in the error space. Our key idea is to use a user-defined trace that pulls a search out of a local minimum without restarting it from a new starting point. Using five benchmark problems, we have compared Novel against some of the best global optimization algorithms and have demonstrated its superior performance.
The key points studied and summarized in this paper are as follows: (a) global optimization involves strategies to escape from local minima after reaching them, (b) a good global search strategy can find better local minima, (c) the search space of small neural networks is more rugged, requiring more powerful global search methods to achieve convergence, (d) a better local minimum in the error space does not always lead to neural networks that generalize better, and (e) smaller networks tend to generalize better.
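The trace idea behind Novel can be illustrated with a toy one-dimensional sketch. Everything below is an illustrative assumption rather than the actual Novel formulation: a gradient descent is continuously attracted toward a slowly sweeping, user-defined trace, so the trajectory is dragged through many basins of the error function without ever being restarted.

```python
import math

def f(x):
    # Toy multimodal "error function" with many local minima.
    return x * x + 10 * math.sin(x) ** 2

def df(x):
    return 2 * x + 20 * math.sin(x) * math.cos(x)

def novel_like_search(x0, steps=2000, lr=0.01, pull=2.0):
    """Gradient descent whose trajectory is attracted to a moving
    external trace; the trace drags the search out of local minima
    without restarting it from a new point.  The trace shape and
    coefficients here are arbitrary illustrative choices."""
    x, best_x, best_f = x0, x0, f(x0)
    for t in range(steps):
        trace = 8.0 * math.cos(0.01 * t)   # slowly sweeping trace
        x -= lr * (df(x) + pull * (x - trace))
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

Plain descent from x0 = 6 would settle in a local minimum with f near 9; the trace-pulled trajectory passes through the global basin at x = 0 and records a far better point.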
In this paper, we propose the concept of regression cubes in which regression models can be obtained for any data cuboid efficiently without accessing the raw data. The traditional classification of measures into distributive, algebraic, and holistic measures cannot accommodate advanced statistical models such as regression and filters. In this paper, we propose a fundamentally new class of measures, compressible measures, in order to support efficient computation of the statistical models. For compressible measures, we can efficiently compute the statistical models at an arbitrary data cell, while each data cell is compressed into a small auxiliary matrix with a fixed size independent of the number of tuples. We can then compute the statistical measures for any data cell from the compressed data of the lower-level cells without accessing the raw data. Time- and space-efficient lossless aggregation formulae are derived for regression and filtering measures. Our analytical and experimental studies show that the proposed approach substantially reduces the memory usage and the overall response time for statistical analysis of multidimensional data.
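The idea of a compressible measure can be sketched for simple linear regression: each data cell keeps only a fixed-size vector of sufficient statistics, and a higher-level cell is aggregated losslessly by component-wise addition of its children's vectors, without ever revisiting the raw tuples. The function names below are illustrative, not the paper's API.

```python
def compress(points):
    """Compress a set of (x, y) tuples into a fixed-size auxiliary
    vector of sufficient statistics for simple linear regression."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return (n, sx, sy, sxx, sxy)

def merge(a, b):
    """Lossless aggregation: the statistics of a higher-level cell
    are the component-wise sums of its children's statistics."""
    return tuple(u + v for u, v in zip(a, b))

def slope_intercept(stats):
    """Recover the regression model for any cell from its compressed
    statistics alone."""
    n, sx, sy, sxx, sxy = stats
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n
```

Merging two compressed cells yields exactly the statistics that compressing their union would, which is what makes the aggregation lossless while keeping each cell's storage independent of the number of tuples.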
In this paper, we investigate methods for online, multi-dimensional regression analysis of time-series stream data, with the following contributions: (1) our analysis shows that only a small number of compressed regression measures instead of the complete stream of data need to be registered for multi-dimensional linear regression analysis, (2) to facilitate online stream data analysis, a partially materialized data cube model, with regression as measure, and a tilt time frame as its time dimension, is proposed to minimize the amount of data to be retained in memory or stored on disks, and (3) an exception-guided drilling approach is developed for on-line, multi-dimensional exception-based regression analysis. Based on this design, algorithms are proposed for efficient analysis of time-series data streams. Our performance study compares the proposed algorithms and identifies the most memory- and time-efficient one for stream data analysis.
Real-time surveillance systems and other dynamic environments often generate tremendous (potentially infinite) volumes of stream data: the volume is too large to be scanned multiple times. However, much of this data resides at a rather low level of abstraction, whereas most analysts are interested in dynamic changes (such as trends and outliers) at relatively high levels of abstraction. To discover such high-level characteristics, one may need to perform online multi-level analysis of stream data, similar to OLAP (online analytical processing) of relational and data-warehouse data.
With limited storage space and the demand for fast response, is it realistic to promote online, multi-dimensional analysis and mining of stream data to alert people about dramatic changes of situations at multiple levels of abstraction?
In this paper, we present an architecture, called stream-cube, which, based on our analysis, is feasible for successful online multi-dimensional, multi-level analysis of stream data. By successful, we mean that the system provides analytical power and flexibility, derives timely and high-quality responses, and consumes limited memory and other resources.
The general design of the stream-cube architecture is as follows. First, a tilt time frame model is taken as the default model for the time dimension. Such a model reduces the amount of data to be retained in memory or stored on disk while preserving flexibility and analytical power. Second, a small number of critical layers are maintained for flexible analysis. Consider that the stream data resides at the primitive layer. It is desirable to identify two critical higher layers in applications: the minimal interest layer and the observation layer. These two layers serve as the basis for stream-cube construction and for multi-level online analysis. Third, instead of computing a complete data cube over the critical layers, one may compute only the layers along a popular path and leave the others for query-driven, online computation. Our analysis shows that this architecture provides a good time-space trade-off: it substantially decreases memory usage and reduces the overall response time. Other possible designs for OLAPing stream data are also discussed in the paper.
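A tilted time frame can be sketched with a simple logarithmic variant (chosen here for brevity; the class name and merge policy are illustrative assumptions): level k holds aggregates spanning 2**k time units, so a stream of t units is covered by only O(log t) entries per cell, with recent time kept at fine granularity and older time progressively coarsened.

```python
class TiltedTimeFrame:
    """Logarithmic tilted time frame sketch: each level stores at most
    `capacity_per_level` (count, total) aggregates; when a level
    overflows, its two oldest entries are merged into the next,
    coarser level."""

    def __init__(self, capacity_per_level=2):
        self.cap = capacity_per_level
        self.levels = [[]]               # levels[k]: newest entry first

    def insert(self, value):
        self.levels[0].insert(0, (1, value))
        k = 0
        while len(self.levels[k]) > self.cap:
            oldest = self.levels[k].pop()        # merge two oldest
            second = self.levels[k].pop()
            merged = (oldest[0] + second[0], oldest[1] + second[1])
            if k + 1 == len(self.levels):
                self.levels.append([])
            self.levels[k + 1].insert(0, merged)
            k += 1

    def registered(self):
        """Number of aggregates currently retained for this cell."""
        return sum(len(lv) for lv in self.levels)
```

After 16 unit insertions the frame retains only a handful of aggregates, yet the merges are lossless for distributive measures such as count and sum.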
This paper presents a novel mechanism for scheduling packet transmissions of real-time video conferencing over the public Internet. By scheduling transmissions of video packets at different priorities in the presence of packet losses and delays, we aim to deliver videos with perceptually pleasing and consistent quality at receivers. Fundamental to this new mechanism is a realization of uneven packet transmission rates (UPTR) in the Internet, which allows the instantaneous packet transmission rate to fluctuate around an average rate without significantly affecting the average loss rate and delay in transmissions. By exploiting this generic UPTR idea, we propose two application scenarios that provide design alternatives for robust real-time video delivery over the lossy Internet. Experimental results clearly show the advantages of our proposed methods over previous solutions that are unaware of UPTR.
We have proposed a statistical method for comparing the performance of heuristic methods (HMs). Instead of using sample averages alone, which can be misleading when sample standard deviations are large, we evaluate a measure called probability of win that depends on both the sample mean and the sample standard deviation. It measures the probability that the population mean performance of one HM is better than that of another, and can be considered a normalized performance measure because its value lies between 0 and 1. We have shown anomalies in the ordering of HMs for some normalization methods when the baseline for normalization is changed, and have identified two normalization methods that are free of such anomalies. The best choice of a baseline HM is one whose performance is invariant as new HMs are generated in learning. One such HM is one whose performance on a test case is the median performance of all HMs evaluated on that test case.
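Under a normal approximation of the sample means, a probability-of-win measure can be computed in closed form from the two samples' means, standard deviations, and sizes. This is a hedged sketch; the exact estimator used in the original work may differ.

```python
import math

def prob_win(mean1, std1, n1, mean2, std2, n2):
    """Probability that the population mean behind sample 1 exceeds
    the one behind sample 2, approximating each sample mean as normal
    with standard error std/sqrt(n)."""
    se = math.sqrt(std1 ** 2 / n1 + std2 ** 2 / n2)
    z = (mean1 - mean2) / se
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

Note the two properties the abstract relies on: the value lies between 0 and 1, and equal sample statistics give exactly 0.5, so neither method is favored.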
In evaluating performance of an HM across subdomains, it is difficult to find an aggregate statistical measure as performance data across subdomains can be dependent. Moreover, we do not know the weight that should be placed on each subdomain in evaluating the aggregate measure. For this reason, we have proposed to place a common performance constraint across all subdomains and accept an HM if its performance satisfies the constraint across all the subdomains. Finally, we have described the architecture of Teacher, our prototype learning system, and experimental results in learning new HMs for TimberWolf. Our results show that automated learning and generalization will lead to improved HMs.
The first part of this thesis addresses the problem of designing algorithm-specific processor arrays. A systematic parameter-based method, called the General Parameter Method (GPM), has been developed to design optimal, lower-dimensional processor arrays for uniform dependence algorithms. The GPM can be used to derive optimal arrays for any user-specified objective expressed in terms of the parameters. The proposed approach employs an efficient search technique to explore the design space and arrive at optimal designs. Equivalence between the parameter- and dependence-based methods can be used to find optimal designs in dependence-based approaches. The GPM has also been extended to derive optimal two-level pipelined algorithm-specific processor arrays; such two-level pipelined arrays can be clocked at higher rates than nonpipelined designs for real-time applications.
The second part of this thesis presents a parallel VLSI architecture for a general-purpose coprocessor for uniform dependence algorithms. The architecture consists of a linear array of processors and a linear chain of buffer memories organized as FIFO queues to store the buffered data. Such an architecture is advantageous from the point of view of scalability and wafer-level integration. A distinguishing feature is the assumption of a limited-bandwidth interface to external memory modules for accessing the data. Such an assumption allows the coprocessor to be integrated easily into existing systems. Efficient techniques have been developed to partition the dependence graph into blocks, sequence the blocks through the buffer memory to reduce the number of data accesses to main memory, and map the blocks using GPM. An important result obtained is the square-root relationship between clock-rate reduction and area of the coprocessor under fixed main-memory bandwidth. From the square-root relationship, it follows that the system yield improves with the area of the coprocessor when chip yield decreases as the inverse square of the clock frequency. Results on matrix-product and transitive-closure applications indicate that the coprocessor can deliver higher speedup or a lower clock rate than a reference one-processor design. Thus, the coprocessor can be used as a general-purpose back-end accelerator for loop-based matrix algorithms.
In this thesis we present new methods for the automated design of new heuristics in knowledge-lean applications and for finding heuristics that can be generalized to unlearned test cases. These applications lack the domain knowledge needed for credit assignment; hence, operators for composing new heuristics are generally model-free, domain-independent, and syntactic in nature. The operators we have used are genetics-based; examples include mutation and crossover. Learning is based on a generate-and-test paradigm that maintains a pool of competing heuristics, tests them to a limited extent, creates new ones from those that have performed well in the past, and prunes poor ones from the pool. We have studied four important issues in learning better heuristics: (a) partitioning of a problem domain into smaller subsets, called subdomains, so that performance values within each subdomain can be evaluated statistically, (b) anomalies in performance evaluation within a subdomain, (c) rational scheduling of limited computational resources in testing candidate heuristics in single-objective as well as multi-objective learning, and (d) finding heuristics that can be generalized to unlearned subdomains.
We show experimental results in learning better heuristics for (a) process placement for distributed-memory multicomputers, (b) node decomposition in a branch-and-bound search, (c) generation of test patterns in VLSI circuit testing, (d) VLSI cell placement and routing, and (e) blind equalization.
We show experimental results in applying Novel to solve nonlinear optimization problems, including (a) the learning of feedforward neural networks, (b) the design of quadrature-mirror-filter digital filter banks, (c) the satisfiability problem, (d) the maximum satisfiability problem, and (e) the design of multiplierless quadrature-mirror-filter digital filter banks. Our method achieves better solutions than existing methods, or achieves solutions of the same quality but at a lower cost.
In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_{dn}) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_{dn}) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_{dn}, leading to the first-order necessary and sufficient condition for CLM_{dn}.
To find a CGM_{dn}, CSA searches for a discrete saddle point with the minimum objective value by carrying out both probabilistic descents in the original-variable space of a discrete augmented Lagrangian function and probabilistic ascents in the Lagrange-multiplier space. We prove that CSA converges asymptotically to a CGM_{dn} with probability one. We also extend CSA to solve continuous and mixed-integer constrained NLPs. By achieving asymptotic convergence, CSA represents one of the major developments in nonlinear constrained global optimization today, which complements simulated annealing (SA) in unconstrained global optimization.
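The interplay of descents in the variable space and ascents in the Lagrange-multiplier space can be sketched on a toy one-dimensional integer problem. The neighborhoods, cooling schedule, acceptance rules, and problem below are illustrative assumptions, not CSA's exact formulation.

```python
import math
import random

def csa_sketch(f, h, x0, steps=20000, t0=10.0):
    """Constrained-simulated-annealing-style search for
    min f(x) s.t. h(x) = 0 over the integers, on the augmented function
    L(x, lam) = f(x) + lam * |h(x)|: probabilistic descents in x and
    probabilistic ascents in lam.  Returns the best feasible point seen."""
    L = lambda x, lam: f(x) + lam * abs(h(x))
    x, lam, best = x0, 0.0, None
    for k in range(steps):
        T = t0 / math.sqrt(1 + k)                  # assumed cooling schedule
        if random.random() < 0.8:                  # trial move in x-space
            y = x + random.choice([-1, 1])
            dL = L(y, lam) - L(x, lam)
            if dL <= 0 or random.random() < math.exp(-dL / T):
                x = y                              # accept descents, some ascents
        else:                                      # trial move in lam-space
            mu = max(0.0, lam + random.uniform(-0.25, 0.5))
            dL = L(x, mu) - L(x, lam)
            if dL >= 0 or random.random() < math.exp(dL / T):
                lam = mu                           # accept ascents, some descents
        if h(x) == 0 and (best is None or f(x) < f(best)):
            best = x
    return best
```

On min (x-7)^2 subject to x mod 3 = 0, the saddle-point dynamics steer the search to a feasible point near the constrained global minimum x = 6.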
Based on CSA, we have studied various strategies of CSA and their trade-offs for solving continuous, discrete, and mixed-integer NLPs. The strategies evaluated include adaptive neighborhoods, distributions to control sampling, acceptance probabilities, and cooling schedules. An optimization software package based on CSA and its various strategies has been implemented.
Finally, we apply CSA to solve a collection of engineering application benchmarks and design filters for subband image coding. Much better results have been reported in comparison with other existing methods.
In this thesis, we present a new theory of discrete constrained optimization using Lagrange multipliers and an associated first-order search procedure (DLM) to solve general constrained optimization problems in discrete, continuous, and mixed-integer space. The constrained problems are general in the sense that they do not assume the differentiability or convexity of functions. Our proposed theory and methods are targeted at discrete problems and can be extended to continuous and mixed-integer problems by coding continuous variables using a floating-point representation (discretization). We have characterized the errors incurred by such discretization and have proved that there exist upper bounds on the errors. Hence, continuous and mixed-integer constrained problems, as well as discrete ones, can be handled by DLM in a unified way with bounded errors.
Starting from new definitions of discrete neighborhoods and constrained local minima in discrete space, and a new generalized augmented Lagrangian function, we have developed new discrete-space first-order necessary and sufficient conditions that characterize all constrained local minima in discrete space. Our proposed first-order conditions show a one-to-one correspondence between a discrete-space constrained local minimum, a discrete-space saddle point, and a solution to the first-order conditions. They are important because they allow us to transform the difficult problem of looking for constrained local minima in the original-variable space into the easier problem of looking for saddle points in the discrete Lagrangian space. They provide a solid foundation for DLM to solve general constrained problems, something the conventional theory of Lagrange multipliers for continuous constrained nonlinear programming cannot achieve.
Finally, we demonstrate the efficiency and effectiveness of our proposed theory and methods. DLM is able to solve systematically general discrete, continuous, and mixed-integer constrained benchmarks, a task not achieved by previous methods. DLM has found better multiplierless filter-bank designs that improve over all of Johnston's benchmark designs while using a maximum of three to six ONE bits in each filter coefficient instead of a floating-point representation. DLM has also efficiently found new solutions for satisfiability problems that could not be obtained by existing local- and global-search techniques.
Coding and transmission of images and videos has been a popular but very challenging topic. Traditional coding algorithms are usually designed to optimize the compression ratio in an error-free environment, not for the current Internet, which is a best-effort, packet-switched, and unreliable network. This conflict presents a number of challenges for high-quality image and video transmission.
Information loss and bandwidth limitation are two major factors that affect the quality of video streaming. In this thesis, we have developed various error concealment and reconstruction-based rate control schemes to address these two issues. First, we have proposed a sender-receiver based approach for designing a multiple-description video coder that facilitates recovery of packet losses. Second, we have studied artificial neural network-based reconstruction algorithms for compensating nonlinear compression losses due to multiple-description coding. Third, we have employed syntax-based packetization and decoder-side feedback for reducing propagation losses. Last, we have incorporated the reconstruction process in the design and evaluation of rate control schemes.
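Multiple-description coding can be illustrated with the textbook even/odd splitting scheme, which is a simplified stand-in for the sender-receiver coder designed in the thesis: each description is independently decodable, and a lost description is concealed by interpolating from the one received.

```python
def mdc_split(samples):
    """Two-description coding by even/odd splitting of a sample stream.
    Either description alone still covers the whole signal at half rate."""
    return samples[0::2], samples[1::2]

def conceal(received, lost_even):
    """Reconstruct the full stream from one surviving description,
    filling the missing samples by linear interpolation of their
    surviving neighbors (boundary samples are simply repeated)."""
    out = []
    for i, s in enumerate(received):
        if lost_even:     # received holds the odd-indexed samples
            prev = received[i - 1] if i > 0 else s
            out.extend([(prev + s) / 2, s])
        else:             # received holds the even-indexed samples
            nxt = received[i + 1] if i + 1 < len(received) else s
            out.extend([s, (s + nxt) / 2])
    return out
```

On a smooth signal the concealment error is confined to the interpolation residual, which is why splitting adjacent samples across descriptions limits the damage of a lost packet.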
Likewise, delay and packet loss are two primary concerns for image transmissions. Existing approaches for delivering single-description coded images by TCP give superior quality but very long delays when the network is unreliable. To reduce delays, we have proposed to use UDP to deliver sender-receiver based multiple-description coded images. To further improve the trade-offs between delay and quality of either TCP delivery or UDP delivery, we have investigated a combined TCP/UDP delivery that can give shorter delay than pure TCP delivery and better quality than pure UDP delivery.
In this thesis, we propose a recurrent FIR neural network, develop a constrained formulation for neural-network learning, study an efficient violation-guided backpropagation algorithm for solving the constrained formulation based on the theory of extended saddle points, and apply neural-network learning to predicting both noise-free and noisy time series. The recurrent FIR neural-network architecture combines a recurrent structure and a memory-based FIR structure in order to provide more powerful modeling ability. The constrained formulation for neural-network learning incorporates the error of each learning pattern as a constraint, a new cross-validation scheme that allows multiple validation sets to be considered in learning, and new constraints that can be expressed in procedural form. The violation-guided backpropagation algorithm first transforms the constrained formulation into an $l_1$-penalty function and then searches for a saddle point of the penalty function.
When applying the constrained formulation and violation-guided backpropagation to neural-network learning on nearly noiseless time-series benchmarks, we achieve much better prediction performance than previous work while using fewer parameters. For noisy time series, such as financial time series, we have systematically studied trade-offs between denoising and information preservation, and have proposed three preprocessing techniques for time series with high-frequency noise. In particular, we have proposed a novel approach that first decomposes a noisy time series into different frequency channels and then preprocesses each channel adaptively according to its level of noise. We incorporate constraints on predicting low-pass data in the lag period when a low-pass filter is employed to denoise a band. The new constraints enable active training in the lag period, which greatly improves the prediction accuracy there. Extensive prediction experiments on financial time series have been conducted to exploit the modeling ability of neural networks, and promising results have been obtained.
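The channel decomposition can be sketched with a two-channel split: a moving-average low-pass channel plus its high-pass residual, each of which can then be denoised according to its own noise level. This centered moving average is a simplified stand-in for the thesis's actual filters (a causal low-pass filter would also introduce the lag period discussed above).

```python
def decompose(series, window=5):
    """Split a series into a low-pass channel (centered moving average)
    and a high-pass residual channel; by construction the two channels
    sum back to the original series exactly."""
    half = window // 2
    low = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        low.append(sum(series[lo:hi]) / (hi - lo))
    high = [s - l for s, l in zip(series, low)]
    return low, high
```

Because the split is exactly invertible, any denoising applied per channel, rather than the decomposition itself, is what controls the trade-off between noise removal and information preservation.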
In this dissertation, we propose a general approach that can significantly reduce the complexity in solving discrete, continuous, and mixed constrained nonlinear optimization (NLP) problems. A key observation we have made is that most application-based NLPs have structured arrangements of constraints. For example, constraints in AI planning are often localized into coherent groups based on their corresponding subgoals. In engineering design problems, such as the design of a power plant, most constraints exhibit a spatial structure based on the layout of the physical components. In optimal control applications, constraints are localized by stages or time.
We have developed techniques to exploit these constraint structures by partitioning the constraints into subproblems related by global constraints. Constraint partitioning leads to much relaxed subproblems that are significantly easier to solve. However, there exist global constraints relating multiple subproblems that must be resolved. Previous methods cannot exploit such structures using constraint partitioning because they cannot resolve inconsistent global constraints efficiently.
We have developed a mathematical theory that provides strong necessary and sufficient analytical conditions for limiting the subspace to be searched when resolving the global constraints. We have proposed the theory of extended saddle points (ESP) to support constraint partitioning when solving NLPs. Based on a novel penalty formulation, ESP offers a necessary and sufficient condition for constrained local optima of NLPs in discrete, continuous, and mixed spaces. It facilitates constraint partitioning by providing a set of necessary conditions, one for each subproblem, to characterize the local optima. It further reduces the complexity by defining a much smaller search space in each subproblem for backtracking. Since resolving the global constraints only incurs a small amount of overhead, our approach leads to a significant reduction of complexity.
Our partition-and-resolve approach has achieved substantial improvements over existing methods in solving AI planning and mathematical programming problems. In this dissertation, we present SGPlan, a planner based on constraint partitioning that has significantly improved the solution time and quality on many application domains when compared to other leading planners. We also describe our implementation that has successfully incorporated ESP with ASPEN, a planning system for spacecraft exploration developed by NASA. The ESP planner performs 100 to 1000 times faster than the original ASPEN on NASA's benchmarks and can generate solutions of much higher quality.
Constraint partitioning has led to a major breakthrough in solving mathematical programming problems in operations research and engineering applications. In this dissertation, we have applied our method to solve some large-scale continuous and mixed-integer NLPs in standard benchmarks. We have solved some large-scale problems that were not solvable by other leading optimization packages and have improved the solution quality on many problems.
This research addresses real-time multimedia communication systems that can achieve high perceptual quality for their users. It focuses on a fundamental understanding of the multiple perceived aspects of quality and their trade-offs, and on the design of run-time control schemes that adapt to changing network conditions and user expectations.
One of the main contributions of this thesis is the use of adaptively scheduled off-line subjective comparison tests to efficiently and accurately learn users' subjective preferences among the alternative trade-offs in order to guide the run-time control schemes to achieve high perceptual quality.
In order to illustrate the application of the general framework and our methodology, throughout this thesis we study the design of real-time VoIP (voice-over-IP) systems that can achieve high perceptual conversational quality. The trade-offs in the design and operation of a VoIP system involve the design of speech codecs and strategies for network control, playout scheduling, and loss concealment.
The perceptual quality of a conversation over a network connection depends on the quality of the one-way speech received (listening-only speech quality, or LOSQ) and the delay incurred from the mouth of the speaker to the ear of the listener (mouth-to-ear delay, or MED). In a conversation, the participants take turns speaking and listening, and both perceive a silence, called a mutual silence (MS), when the conversation switches from one party to the other. When the connection has delays, the MSs perceived by a participant consist of alternating short and long silence periods between turns. As a result, listeners may experience lower perceptual quality when the MSs are highly asymmetric, when some speakers appear more distant than others, or when some respond more slowly than others.
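The asymmetry of mutual silences follows from a simple accounting of the delay, sketched below for an idealized two-party conversation: a reply to one's own turn arrives after the other party's thinking time plus the round trip (2 * MED), while one's own replies feel like silences of just the local thinking time. The function is an illustrative model, not a formula from the thesis.

```python
def perceived_mutual_silences(think_times, med):
    """Mutual silences perceived by one participant in an alternating
    two-party conversation: even-indexed turn switches wait for the
    other party (thinking time plus round-trip delay 2*med), while
    odd-indexed ones are the participant's own thinking times."""
    ms = []
    for i, t in enumerate(think_times):
        ms.append(t + 2 * med if i % 2 == 0 else t)
    return ms
```

With equal thinking times of 0.5 s and an MED of 0.3 s, one participant perceives silences alternating between 1.1 s and 0.5 s, which is exactly the short/long asymmetry described above.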
The evaluation of conversational quality is a largely unexplored area. There are many objective metrics for assessing the quality of a VoIP conversation, but there is no single objective metric whose results match well with subjective results. Indiscriminate subjective testing is not feasible because it is prohibitively expensive to carry out many such tests under various conversational and network conditions. Moreover, there is no systematic method to generalize the subjective test results to unseen conditions. To this end, there are five key innovations in this research that will address the limitations of existing work and improve the perceptual quality of VoIP systems.
Firstly, we have developed a methodology and test-bed to objectively and subjectively evaluate VoIP systems under repeatable and fair conditions using a black-box approach. We have applied this methodology and test-bed to four commonly used VoIP systems: Skype, Google-Talk, Windows Live, and Yahoo Messenger. The results show that the systems differ in how they cope with network imperfections such as jitter and loss. We also observe that the systems do not change their operation as a function of the one-way delay of the connection or of the turn-taking frequency of the conversation. The results show that Windows Live is preferred under a significant set of conditions, but none of the systems is dominant under all conditions.
We have also learned the mapping between easily obtainable objective measures and subjective preferences of users, in order to allow any VoIP system to be comprehensively compared against others without expensive subjective comparisons. We later use the learned mapping to subjectively compare our newly designed system with the four systems already evaluated.
Secondly, we have developed a general model of pair-wise subjective comparisons, based on individually identified properties, axioms and lemmas, that models comparisons on a continuous operating curve with a single control parameter. The model provides a basis for developing the method to schedule adaptive off-line subjective tests and for identifying the optimal point by fusing the information obtained from separate subjective evaluations on the same operating curve. The model can be used for formulating and solving any type of pair-wise comparison problem that exhibits the same properties identified. The model is flexible to allow the existence of multiple optimal points on an operating curve and includes a belief function framework that can guide the search for optimal points efficiently. Furthermore, the model is built on a statistical framework that allows for the confidence of individual evaluation results to be represented in the conclusiveness of the combined belief function.
Thirdly, we have developed a method for tackling the control-design problem of finding the optimal point in an N-dimensional space that includes all the metrics affecting quality. The overall problem is transformed into two independent problems: finding the optimal point on a continuous but one-dimensional curve, and learning the mapping on a set of curves that adequately spans the $K$-dimensional ($K < N$) space.
Lastly, we have generalized the learned results in the design of play-out scheduling (POS) control to conditions that are unseen at design time and observed only at run-time. Along with design choices in other components of the architecture, this results in a VoIP system that achieves higher and more consistent perceptual quality than the four other VoIP systems analyzed. We have also shown that our system performs very close to a non-causal ideal system in which the POS decisions are made optimally using future information.
Our model and methodologies are applicable to a wide variety of control-design problems for real-time multimedia communication systems that operate under constrained resources, communicate over non-stationary IP networks, and whose overall quality perceived by users has multiple counteracting aspects that cannot be represented by a single objective metric.
An efficient, scalable and fair protocol -- the Wireless Window Protocol (WWP) -- is proposed. WWP is a limited-contention protocol in which the set of contending mobiles is chosen based on a global contention window maintained by every mobile station. The contention window is adjusted based on three possible channel states: no transmission, success, and collision. The window bounds are functions of the current channel load. Simulations show that the performance of WWP is superior to that of the distributed coordination function of the IEEE 802.11 draft standard.
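The three-state window adjustment can be pictured with the sketch below. The actual WWP update rules and bounds are functions of the estimated channel load; the multiplicative factors and limits here are hypothetical placeholders.

```python
def adjust_window(window, state, w_min=1, w_max=1024):
    """Illustrative contention-window update for a limited-contention
    protocol: shrink the eligible set on a collision (too many
    contenders), grow it on an idle slot (too few), and keep it
    unchanged on a success."""
    if state == "collision":
        window = max(w_min, window // 2)
    elif state == "no_transmission":
        window = min(w_max, window * 2)
    # on "success" the window is left unchanged
    return window
```

Because every mobile station observes the same channel outcomes, each can apply the same update and keep its copy of the global window consistent without explicit coordination.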
Filter bank design is a multi-objective optimization problem. An important issue is that the design objectives are not unique and conflict with one another, leading to designs with different trade-offs. Here, we define the reconstruction error as the objective function and treat stopband energy, passband energy, stopband ripple, passband ripple, and transition bandwidth as constraints. Filter bank design can therefore be formulated as a single-objective, multiple-constraint optimization problem. Because both the objective function and the constraints are nonlinear, methods for solving nonlinear constrained optimization problems are applied.
Because the reconstruction error, stopband energy, and passband energy are expressed as integrals, closed-form formulas are derived for these functions and their derivatives in order to preserve precision and save computation time.
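For instance, the stopband energy of a real-coefficient FIR filter admits a closed form, since the integral of cos(k*w) from w_s to pi equals -sin(k*w_s)/k for integer k != 0, and pi - w_s for k = 0. A minimal sketch of this idea (the thesis also derives closed forms for the derivatives, which are not shown here):

```python
import numpy as np

def stopband_energy(h, ws):
    """Closed-form stopband energy E = integral over [ws, pi] of
    |H(e^{jw})|^2 for an FIR filter with real coefficients h,
    using |H|^2 = sum_{n,m} h[n] h[m] cos((n - m) w)."""
    n = np.arange(len(h))
    k = n[:, None] - n[None, :]
    I = np.where(k == 0, np.pi - ws,
                 -np.sin(k * ws) / np.where(k == 0, 1, k))
    return float(h @ I @ h)
```

For the two-tap averager h = [0.5, 0.5] with ws = pi/2, the energy evaluates to pi/4 - 1/2, matching direct integration of |H|^2 = 0.5 + 0.5*cos(w).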
The difficulty in solving a nonlinear optimization problem lies in finding the global solution. The Novel method, a new nonlinear optimization package, is applied here; it implements a heuristic function that leads the search trajectory to explore the whole problem space. Two methods for handling inequality constraints, the Maxq and the slack-variable methods, are compared in this research.
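The two constraint-handling transforms can be sketched for a single inequality constraint g(x) <= 0 as follows; this is a simplified one-constraint illustration, not the package's implementation.

```python
def maxq_transform(g, q=2):
    """Maxq method: replace the inequality g(x) <= 0 by the equality
    h(x) = max(g(x), 0)**q, which is zero exactly on the feasible set
    and smooth for q >= 2."""
    return lambda x: max(g(x), 0.0) ** q

def slack_transform(g):
    """Slack-variable method: replace g(x) <= 0 by the equality
    g(x) + s**2 = 0 over the augmented variables (x, s)."""
    return lambda x, s: g(x) + s * s
```

Both turn an inequality into an equality that an equality-constrained solver can handle; Maxq keeps the original variables, while the slack-variable method enlarges the search space by one variable per constraint.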
Experimental results are presented at the end. Issues such as the impact of constraint formulations and of the weight on the objective function are discussed. We compare the performance of our designs with the best known solutions and show the improvements achieved.
Our stochastic search algorithms (SSAs) for solving general constrained NLPs are based on the theory of discrete constrained optimization using Lagrange multipliers, which shows the equivalence between the set of constrained local minima and the set of discrete-neighborhood saddle points (DSPs). To implement this theory, we propose a general procedural framework for locating a DSP. By incorporating genetic algorithms in the framework, we develop and evaluate new constrained search algorithms: the constrained genetic algorithm (CGA) and the combined constrained simulated annealing (CSA) and genetic algorithm (CSAGA).
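To make the saddle-point condition concrete, a toy check over a discrete neighborhood can be written for a single equality constraint, taking L(x, l) = f(x) + l*|h(x)| as a simplified form of the discrete Lagrangian (the function names and the one-constraint setting are illustrative):

```python
def is_discrete_saddle_point(x, lam, f, h, neighbors, lam_candidates):
    """Check the DSP condition L(x*, l) <= L(x*, lam*) <= L(x', lam*)
    for all multipliers l and all discrete neighbors x' of x*,
    with L(x, l) = f(x) + l * |h(x)|."""
    L = lambda x_, l_: f(x_) + l_ * abs(h(x_))
    return all(L(x, l) <= L(x, lam) for l in lam_candidates) and \
           all(L(x, lam) <= L(xn, lam) for xn in neighbors(x))
```

A feasible point makes the first inequality trivial (the penalty term vanishes), so the check reduces to verifying that no discrete neighbor has a smaller Lagrangian value.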
All of these algorithms are SSAs. One of the most important and difficult issues in using SSAs is scheduling them so as to optimize average search efficiency. Our research shows that SSAs can be scheduled in a way that minimizes their expected search time. The theory uses iterative deepening to identify schedules that are optimal up to a constant factor, in the sense that their expected search time is within a constant factor of that of the same SSA run under an optimal schedule. We also extend the theory to identify optimal schedules for parallel processing of SSAs and show that finding an optimal parallel schedule is NP-complete. We therefore use iterative deepening to find a suboptimal parallel schedule and derive performance bounds between our suboptimal schedule and the optimal parallel schedule. The theory is general and has been applied to CSA, CGA and CSAGA to develop optimal schedules for these algorithms.
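The iterative-deepening idea can be sketched as follows: run the SSA under a small time budget and multiply the budget until a solution is found, so the total time spent stays within a constant factor of the unknown optimal budget. The `run_ssa` callable and the parameter values are stand-ins, not the thesis's actual interface.

```python
def iterative_deepening_search(run_ssa, t0=1, growth=2, max_rounds=20):
    """Iterative-deepening schedule for a stochastic search algorithm:
    `run_ssa(budget)` returns a solution or None if the budget was
    exhausted. Budgets grow geometrically, so total time spent is at
    most growth/(growth-1) times the successful budget."""
    budget = t0
    for _ in range(max_rounds):
        result = run_ssa(budget)
        if result is not None:
            return result, budget
        budget *= growth
    return None, budget
```

With doubling budgets, the total work over all failed rounds is less than the final successful round, which is why the schedule is optimal up to a constant factor.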
Based on the optimal schedules, we propose optimal anytime SSAs that generate solutions of improving quality as more time is used. We propose new schedules over successive quality levels such that the total time for solving all quality levels is of the same order of magnitude as that for finding a constrained global optimum.
Finally, we apply our optimal anytime SSAs to solve a collection of engineering application benchmarks, obtaining much better results than other existing methods.
In this thesis, we study constraint relaxations of various nonlinear programming (NLP) algorithms in order to improve their performance. For both stochastic and deterministic algorithms, we study the relationship between the expected time to find a feasible solution and the constraint relaxation level, and build an exponential model of this relationship. Based on the model, we develop a constraint relaxation schedule such that the total time spent finding a feasible solution over all relaxation levels is of the same order of magnitude as the time spent finding a solution of similar quality using the last relaxation level alone.
When the objective and constraint functions are stochastic, we define new criteria of constraint satisfaction and corresponding constraint relaxation schedules. As in the deterministic case, we build an exponential model relating the expected time to find a feasible solution to the associated constraint relaxation level. We develop an anytime constraint relaxation schedule such that the total time spent solving a problem over all constraint relaxation levels is of the same order of magnitude as the time spent finding a feasible solution using the last relaxation level alone. Finally, we study the asymptotic behavior of our new algorithms and prove their asymptotic convergence, based on the theory of asymptotic convergence in simulated annealing and constrained simulated annealing.
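The flavor of such a schedule can be sketched with a geometric sequence of tolerance levels. Under an exponential-type time model, the total time over all levels is within a constant factor of the time at the tightest level; the tolerance values and the `1/eps` time model below are illustrative, not the thesis's fitted model.

```python
def relaxation_schedule(eps0, factor, levels):
    """Geometric constraint-relaxation schedule: start from a loose
    tolerance eps0 and tighten it by `factor` at each level."""
    return [eps0 * factor ** i for i in range(levels)]

def total_time(schedule, time_model):
    """Total time over all relaxation levels under a given model of
    time-to-feasibility as a function of the tolerance."""
    return sum(time_model(eps) for eps in schedule)
```

With halving tolerances and time growing as 1/eps, each level costs twice the previous one, so the sum over all levels is less than twice the cost of the last level alone.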
In this thesis, we address the quality-delay trade-offs by using multiple description coding (MDC) and unequal error protection (UEP), together with the UDP protocol, to deliver JPEG 2000 images over the Internet. We first analyze Internet traffic data and determine a proper interleaving factor by which UDP packets are interleaved in order for losses to stay below a certain threshold. We then propose an MDC scheme with a new reconstruction strategy carried out in the frequency domain, thereby avoiding sample-domain segmentation, a major cause of performance degradation in previous sample-domain MDC schemes. An optimized reconstruction-based transform for the new scheme is also derived and incorporated into our proposed MDC scheme. We examine the performance of our MDC algorithm under controlled-loss scenarios and real Internet trace-driven simulations, and it performs well in both. We also study the quality-delay behavior of our proposed scheme under UDP and compare it with that of traditional TCP delivery. The results show that this new MDC scheme can be an attractive alternative to the traditional approach.
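The interleaving idea can be sketched as a block interleaver: packets are written into a matrix by rows and read out by columns, so a burst of consecutive losses on the wire maps to isolated losses after de-interleaving. This is a generic sketch; the thesis derives the interleaving factor from measured Internet traces.

```python
def interleave(packets, factor):
    """Block-interleave a packet sequence with interleaving factor
    `factor` (assumes len(packets) is a multiple of factor). Packets
    adjacent in the original stream end up len(packets)//factor slots
    apart on the wire, so short wire bursts hit originals that are
    `factor` indices apart."""
    rows = factor
    cols = len(packets) // rows
    return [packets[c * rows + r] for r in range(rows) for c in range(cols)]
```

De-interleaving at the receiver applies the inverse permutation before the MDC reconstruction sees the loss pattern.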
Lastly, we study UEP schemes that pack code-blocks and FEC codes together in both serial and parallel styles. Based on this study, we propose a hybrid packing algorithm that applies serial or parallel packing to different parts of a code stream. We test our proposed hybrid packing algorithm under several measures, namely, PSNR, undecodable probability, and sensitivity. The results show that our proposed hybrid packing scheme compares favorably with TCP delivery.
This research aims to address the issues faced by real-time video-conferencing systems in locating a perceptually optimal operating point under various network and conversational conditions. In order to determine the perceptually optimal operating point of a video-conferencing system, we must first be able to fairly assess the quality of the system's current operating point and compare it with another operating point to determine whether one is better than the other in terms of perceptual quality. However, no single objective quality metric currently exists that can accurately and fully describe the perceptual quality of a real-time video conversation. Hence, there is a need for a controlled environment in which tests can be conducted, different metrics studied, and the best trade-offs among them identified.
We begin by studying the components of a typical real-time video-conferencing setup and the impact that various network and conversational conditions can have on the overall perceptual quality. We also look into the different metrics available to measure these impacts. We then create a platform to perform black-box testing on current video-conferencing systems and observe how they handle changes in operating conditions. The platform is used to conduct a brief evaluation of the performance of Skype, a popular commercial video-conferencing system. However, we are not able to modify Skype's system parameters.
The main contribution of this thesis is the design of a new testbed that provides a controlled environment in which tests can be conducted to determine the perceptually optimal operating point of a video conversation under specified network and conversational conditions. This testbed allows us to modify certain parameters, such as frame rate and frame size, which was not previously possible. The testbed takes as input two recorded videos of the two speakers of a face-to-face conversation, along with the desired output video parameters, such as frame rate, frame size and delay. A video-generation algorithm is designed as part of the testbed to handle modifications to the frame rate and frame size of the videos, as well as delays inserted into the recorded conversation to simulate the effects of network delays. The most important issue addressed is the generation of new frames to fill the gaps created by a change in frame rate or an inserted delay; unlike in the case of voice, a period of silence cannot simply be used to handle these situations.

The testbed uses a packetization strategy based on an uneven packet transmission rate (UPTR) that handles the packetization of interleaved video and audio data; it also uses piggybacking to provide redundancy when required. Losses can be injected either randomly or based on packet traces collected via PlanetLab. The processed videos are then pieced together side by side to give the viewpoint of a third party observing the video conversation from the site of the first speaker. Hence, the first speaker is observed to have a faster reaction time, free of network delays, than the second speaker, who is simulated to be located at the remote end. The video of the second speaker also reflects the degradations in perceptual quality induced by the network conditions, whereas the video of the first speaker is of perfect quality.
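The frame-rate-change part of such a generation algorithm can be pictured with nearest-frame retiming, in which each output instant repeats the most recent source frame. This is a simple stand-in; the testbed's actual algorithm also handles delay insertion and may synthesize frames rather than repeat them.

```python
def retime_frames(frames, src_fps, dst_fps):
    """Produce a dst_fps frame sequence from a src_fps recording by
    holding the most recent source frame at each output instant."""
    duration = len(frames) / src_fps
    n_out = int(duration * dst_fps)
    out = []
    for i in range(n_out):
        t = i / dst_fps                          # output timestamp
        src_idx = min(int(t * src_fps), len(frames) - 1)
        out.append(frames[src_idx])
    return out
```

Downsampling drops intermediate frames, while upsampling duplicates frames to fill the gaps, which is exactly where more sophisticated frame generation can improve perceived smoothness.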
Hence, with the testbed, we are able to generate output videos for different operating points under the same network and conversational conditions, and thus to make comparisons between two operating points. With the testbed in place, we demonstrate how it can be used to evaluate the effects of various parameters on the overall perceptual quality. Lastly, we demonstrate the results of applying an existing efficient search algorithm, used for estimating the perceptually optimal mouth-to-ear delay (MED) of a Voice-over-IP (VoIP) conversation, to a video conversation. This is achieved by using the testbed to conduct a series of subjective and objective tests to identify the perceptually optimal MED under specific network and conversational conditions.
Drawdown is the percentage loss of wealth from the previous peak. It is an important downside-risk metric for measuring the performance of portfolio investments. Constructing a portfolio with small drawdowns and high returns is the consistent goal of all investors. However, large drawdowns are not rare in financial history, and they have occurred even in stock indices. To the best of our knowledge, there are no effective and practical drawdown-reduction approaches.
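Drawdown as defined here can be computed directly from a price series by tracking the running peak; for example:

```python
def max_drawdown(prices):
    """Maximum drawdown: the largest percentage loss from a running
    peak over a price series."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)
        mdd = max(mdd, (peak - p) / peak)
    return mdd
```

For the series 100, 120, 90, 110, the peak of 120 followed by the trough of 90 gives a maximum drawdown of 25%.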
In this thesis, we propose a novel pattern-based drawdown-reduction model to reduce future drawdowns of stock index investment. It first extracts two kinds of relevant features, ddFeature and priceFeature, from historical stock index price series; then trains a prediction model on these historical pattern-and-drawdown pairs; and finally makes investment decisions according to a binary investment function. The effects of the parameters involved in feature extraction and investment decisions are studied in detail, and an adaptive investment method is also proposed. Experimental results on three datasets, the S&P 500, the NASDAQ Composite and the HSI, show that the proposed method achieves both small drawdowns and high cumulative returns.