Search results
Searched for keyword: Algorithms (835 results found)
The algorithm presented in the article uses the mean shift method of estimating the local maxima of the density function of a random vector, proposed by Comaniciu and Meer. (text fragment)
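As a rough illustration of the mean shift idea mentioned above (a generic sketch, not the implementation from the article), the following Python snippet shifts a point toward a local maximum of a kernel density estimate; the Gaussian kernel, the bandwidth, and the toy data are assumptions.

```python
import numpy as np

def mean_shift_point(x, samples, bandwidth=1.0, iters=50, tol=1e-6):
    """Shift a single point x toward a local density maximum of `samples`
    using a Gaussian kernel (Comaniciu-Meer style mean shift)."""
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        diff = samples - x                                          # displacement to each sample
        w = np.exp(-np.sum(diff**2, axis=1) / (2 * bandwidth**2))   # kernel weights
        x_new = (w[:, None] * samples).sum(axis=0) / w.sum()        # weighted mean
        if np.linalg.norm(x_new - x) < tol:                         # converged to a mode
            break
        x = x_new
    return x

# toy data: two clusters; the start point drifts to the nearer mode
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
print(mean_shift_point([2.0, 2.0], samples, bandwidth=0.5))
```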
The paper analyses a method of correction decoding for twice-iterated codes based on Hamming codes. The encoding rules are given, along with a correction-decoding algorithm that enables correct correction of errors of multiplicity lower than five. The developed decoding algorithm relies on three kinds of correction: correction based on column syndromes KOR_SK, correction based on row syndromes KOR_SW, and correction based on the number of non-zero syndromes KOR_LS_22. The first two are classical error corrections for Hamming codes, in which the syndrome of a sequence indicates, in natural binary code, the position to be corrected. In the third case, the correction is performed at four positions determined by the numbers of the rows and columns whose syndromes are non-zero. The decoding algorithm has two phases: in the first, all row and column syndromes are determined and the numbers of rows (LSw) and columns (LSk) with non-zero syndromes are computed; depending on the values of LSw and LSk, the appropriate error correction is performed. In the second phase, all row and column syndromes are determined again and the correction KOR_SW or KOR_SK is carried out, depending on which kind of correction was performed in the first phase. An analysis of the operation of the correction decoder for all possible errors of multiplicity lower than five is presented, and those placements of errors of multiplicity five that lead to an incorrect decoder decision are identified. (original abstract)
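The first decoding phase described above (computing all row and column syndromes and the counts LSw and LSk) can be illustrated with a minimal Python sketch for a (7,4) Hamming code iterated over rows and columns. This is only a toy under assumptions: the block layout, the helper names, and the row-only correction shown here are illustrative and do not reproduce the paper's KOR_SK/KOR_SW/KOR_LS_22 decision rules.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i+1, so a non-zero syndrome names the erroneous position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    """Syndrome of a length-7 word as an integer (0 means 'no error seen')."""
    s = H @ word % 2
    return int(s[0]) * 4 + int(s[1]) * 2 + int(s[2])

def phase_one(block):
    """First decoding phase: row/column syndromes of a 7x7 iterated-code block
    and the counts LSw, LSk of rows/columns with a non-zero syndrome."""
    row_syn = [syndrome(block[i, :]) for i in range(7)]
    col_syn = [syndrome(block[:, j]) for j in range(7)]
    LSw = sum(s != 0 for s in row_syn)
    LSk = sum(s != 0 for s in col_syn)
    return row_syn, col_syn, LSw, LSk

def correct_rows(block, row_syn):
    """Row-wise correction: in every row with a non-zero syndrome,
    flip the bit whose position the syndrome names."""
    for i, s in enumerate(row_syn):
        if s:
            block[i, s - 1] ^= 1
    return block

# single error: both the affected row and column report it
block = np.zeros((7, 7), dtype=int)        # the all-zero word is a valid codeword
block[2, 5] ^= 1                           # inject one error
row_syn, col_syn, LSw, LSk = phase_one(block)
print(LSw, LSk)                            # -> 1 1
print(correct_rows(block, row_syn).sum())  # -> 0 (error removed)
```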
The article analyses single-processor scheduling problems with learning and wear (ageing) effects under the following minimization criteria: makespan with job release dates, total job completion time, maximum lateness, and the number of tardy jobs. The learning effect is understood as the process by which the processor gains experience, which shortens the processing times of subsequent jobs. The wear (ageing) effect, in turn, reduces the efficiency of the processor; its measurable result is a lengthening of job processing times. The paper establishes a number of properties of the studied problems that allow polynomial-time optimal algorithms to be constructed for special cases of these problems. (original abstract)
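As a hedged illustration of the learning effect, the snippet below uses a common position-based model from this literature (the job in position r takes p·r^a with a < 0); this particular model and the SPT comparison are assumptions, not necessarily the exact formulation of the paper.

```python
def total_completion_time(proc_times, a=-0.2):
    """Total completion time of a single-machine sequence under a
    position-based learning effect: the job in position r takes p * r**a."""
    t, total = 0.0, 0.0
    for r, p in enumerate(proc_times, start=1):
        t += p * r ** a       # actual processing time shrinks with position
        total += t
    return total

jobs = [5.0, 2.0, 8.0, 3.0]
# Sorting by shortest processing time (SPT) is a natural candidate rule for
# this objective; here we simply compare two orders.
print(total_completion_time(jobs))
print(total_completion_time(sorted(jobs)))
```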
The paper presents an algorithm for determining the maximum multicommodity flow, based on Dinic's algorithm for finding the maximum single-commodity flow. Dinic's algorithm is used to determine the paths, and the flows carried along them, between each s-t pair. The idea of the presented algorithm is to let the commodities flow along the edges belonging to the determined paths in a balanced way; only the streams with the largest values are restricted. (original abstract)
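For context, a textbook Python implementation of Dinic's single-commodity max-flow algorithm is sketched below; the multicommodity balancing step of the paper is not reproduced, and the class and method names are the sketch's own.

```python
from collections import deque

class Dinic:
    """Dinic's max-flow: BFS builds a level graph, DFS sends blocking flows."""
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]

    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])    # forward edge
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])  # residual edge

    def bfs(self, s, t):
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def dfs(self, u, t, f):
        if u == t:
            return f
        while self.it[u] < len(self.adj[u]):
            e = self.adj[u][self.it[u]]
            v, cap, rev = e
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self.dfs(v, t, min(f, cap))
                if d > 0:
                    e[1] -= d                      # push flow forward
                    self.adj[v][rev][1] += d       # and add residual capacity
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self.bfs(s, t):
            self.it = [0] * self.n
            while True:
                f = self.dfs(s, t, float("inf"))
                if f == 0:
                    break
                flow += f
        return flow

# tiny example: two disjoint augmenting paths of capacity 1 each
g = Dinic(4)
g.add_edge(0, 1, 1)
g.add_edge(1, 3, 1)
g.add_edge(0, 2, 1)
g.add_edge(2, 3, 1)
print(g.max_flow(0, 3))   # -> 2
```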
In this investigation the recently developed InterCriteria Analysis (ICA) is applied to examine the influence of two main genetic algorithm parameters, the crossover and mutation rates, during model parameter identification of S. cerevisiae and E. coli fermentation processes. The apparatus of index matrices and intuitionistic fuzzy sets, which forms the core of ICA, is used to establish the relations between the investigated genetic algorithm parameters on the one hand and the fermentation process model parameters on the other. The results obtained after applying ICA are analysed with respect to convergence time and model accuracy, and some conclusions about the derived interactions are reported. (original abstract)
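A simplified reading of the ICA core is sketched below: for two criteria measured over the same objects, count the object pairs ordered the same way (degree of agreement μ) and the opposite way (degree of disagreement ν). The tie handling and the toy data are assumptions.

```python
from itertools import combinations

def intercriteria_pair(c1, c2):
    """Degrees of agreement (mu) and disagreement (nu) between two criteria
    measured over the same objects, in the spirit of InterCriteria Analysis:
    count object pairs ordered the same way / the opposite way."""
    n = len(c1)
    total = n * (n - 1) // 2
    agree = disagree = 0
    for i, j in combinations(range(n), 2):
        d1 = c1[i] - c1[j]
        d2 = c2[i] - c2[j]
        if d1 * d2 > 0:
            agree += 1
        elif d1 * d2 < 0:
            disagree += 1
        # ties contribute to neither count, i.e. to the uncertainty degree
    return agree / total, disagree / total

# e.g. a crossover rate vs. a model parameter estimate across several GA runs
mu, nu = intercriteria_pair([0.6, 0.7, 0.8, 0.9], [1.2, 1.1, 1.4, 1.5])
print(mu, nu, 1 - mu - nu)
```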
The technologically inevitable introduction of various kinds of sensors into our lives has resulted in the production of huge amounts of data delivered as streams. Improper acquisition of this information may lead to errors caused by mixing observations coming from different process threads. A proper representation of the information can provide some remedy. Hence, this paper introduces a graph-stream structure representing the performance of a complex multi-threaded process. The proposed network representation can separate information describing multiple threads and allows causal relationships between them to be modelled. It provides separated and segregated information, opening an opportunity to develop qualitatively better and simpler knowledge-retrieval algorithms. Further, the paper delivers a method for extracting this representation from a multivariate data stream, using a clustering algorithm designed specifically for this purpose and evaluated quantitatively and qualitatively on example data sets. (original abstract)
Recursive Filters (RFs) are a well-known way to approximate the Gaussian convolution and are used intensively in several research fields. When applied to signals with support in a finite domain, RFs can generate distortions and artifacts, mostly localized at the boundaries of the computed solution. To deal with this issue, heuristic and theoretical end conditions have been proposed in the literature. However, these end-condition strategies do not consider the case in which a Gaussian RF is applied more than once, as often happens in realistic applications. In this paper, we suggest a way to use the end conditions for such a K-iterated Gaussian RF and propose an algorithm that implements the described approach. Tests and numerical experiments show the benefit of the proposed scheme. (original abstract)
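As a toy stand-in for a recursive filter on a finite-support signal (not the Gaussian RF or the end conditions of the paper), the sketch below runs a first-order IIR smoother forward and then backward; the filter order, the coefficient, and the boundary initialization are assumptions.

```python
import numpy as np

def recursive_smooth(x, alpha=0.5):
    """First-order recursive (IIR) smoothing run forward then backward,
    a toy stand-in for a recursive Gaussian filter. The boundary is handled
    with the simplest end condition: start each pass from the first sample."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]                              # forward pass, causal
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
    z = np.empty_like(y)
    z[-1] = y[-1]                            # backward pass, anti-causal
    for n in range(len(x) - 2, -1, -1):
        z[n] = alpha * y[n] + (1 - alpha) * z[n + 1]
    return z

signal = np.r_[np.zeros(10), np.ones(10)]    # a step with finite support
print(recursive_smooth(signal, alpha=0.3))   # boundary values show the end effect
```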
The stochastic simplex bisection (SSB) algorithm is evaluated against the collection of optimizers in the Python SciPy.Optimize module on a prominent test set. The SSB algorithm greatly outperforms all SciPy optimizers, save one, in exactly half the cases. It does slightly worse on quadratic functions, but excels at trigonometric ones, highlighting its multimodal prowess. Unlike the SciPy optimizers, it sustains a high success rate. The SciPy optimizers would benefit from a more informed metaheuristic strategy and the SSB algorithm would profit from quicker local convergence and better multidimensional capabilities. Conversely, the local convergence of the SciPy optimizers is impressive and the multimodal capabilities of the SSB algorithm in separable dimensions are uncanny.(original abstract)
This paper describes a face recognition algorithm using feature points of face parts, which is classified as a feature-based method. Since recognition performance depends on the combination of adopted feature points, we make effective use of all reliable feature points. From moving video input, well-conditioned face images with a frontal orientation and without facial expression are extracted. To select such well-conditioned images, an iteratively minimizing variance method is applied to the variable input face images. This iteration quickly converges to the minimum variance of 1 for a quarter to an eighth of all data, which corresponds to 3.75-7.5 Hz on average. Also, the maximum (worst-case) interval between two values with minimum deviation is about 0.8 seconds for the tested feature-point sample. (original abstract)
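A heavily simplified guess at the iteratively-minimizing-variance selection is sketched below: repeatedly drop the sample farthest from the mean until the variance of the survivors reaches the target. The scalar per-frame measure and the stopping rule are assumptions, not the paper's procedure.

```python
import numpy as np

def select_low_variance(values, target_var=1.0):
    """Iteratively drop the sample farthest from the mean until the variance
    of the remaining samples falls to the target, keeping the survivors."""
    vals = list(values)
    while len(vals) > 2 and np.var(vals) > target_var:
        worst = int(np.argmax(np.abs(np.array(vals) - np.mean(vals))))
        vals.pop(worst)              # discard the least "well-conditioned" sample
    return vals

# e.g. a per-frame feature-point measure; outliers get filtered out
frames = [10.2, 9.8, 10.1, 14.5, 10.0, 9.9, 3.2, 10.3]
print(select_low_variance(frames))
```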
An algorithm for extracting material shape and spatial information from a non-uniform background, and for generating object skeletons for statistical two-dimensional experiments using a random-walk approach, is presented. It finds applications in textile analysis and in the microscopic analysis of various materials such as hairs, and allows for further precise determination of textile yarn dimensions as well as other geometrical characteristics such as the fractal dimension. (original abstract)
Floating-point additions in a concurrent execution environment are known to be hazardous, as the result depends on the order in which the operations are performed. This problem is encountered in data-parallel execution environments such as GPUs, where reproducibility involving floating-point atomic addition is challenging. It is due to the rounding error or cancellation that appears with each operation, combined with the lack of control over execution order. In this article we propose two solutions to address this problem: work reassignment and fixed-point accumulation. Work reassignment consists in enforcing an execution order that leads to weak reproducibility. Fixed-point accumulation consists in avoiding rounding errors altogether thanks to a long accumulator, and enables strong reproducibility. (original abstract)
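A minimal sketch of the fixed-point accumulation idea follows, with Python's arbitrary-precision integers standing in for the long accumulator used on a GPU; the scale factor and the toy data are assumptions.

```python
from random import random, seed, shuffle

SCALE = 2 ** 40   # fixed-point scale: plenty of fractional bits for this toy data

def reproducible_sum(values):
    """Accumulate in an arbitrary-precision integer fixed-point accumulator,
    so the result no longer depends on the order of the additions."""
    acc = 0
    for v in values:
        acc += round(v * SCALE)    # one rounding per input, independent of order
    return acc / SCALE

seed(0)
data = [random() * 10 ** (i % 6) for i in range(10000)]
shuffled = data[:]
shuffle(shuffled)
print(reproducible_sum(data) == reproducible_sum(shuffled))   # True, always
print(sum(data) == sum(shuffled))                             # often False
```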
The HUGO project, published in 2010, can be considered one of the most promising directions in the design of highly undetectable steganography. The main idea of that approach is to minimise the embedding impact from the steganalysis point of view. This goal is achieved by using trellis codes in the embedding procedure, the Viterbi algorithm (VA), and the SPAM features. However, the optimality of the VA remained unclear, because the generic purpose of the VA is to correct errors with trellis codes rather than to embed secret information. The first goal of the current paper is to prove the optimality of the VA application in its generalised form, proposed about 30 years ago by one of the authors of this paper. The second goal is to optimise the parameters of the trellis code check matrix for better undetectability of stegosystems. (original abstract)
In this paper, the problem of determining the most effective server placement in a hypercube network structure is considered. An algorithm consisting of two stages is described: the first stage handles the server placement and the second generates the appropriate communication structure. The correctness of the algorithm has been verified through simulation tests prepared and implemented in the Riverbed Modeler environment, and the results of these tests for exemplary structures are presented. Some properties of the server placement in the 4-dimensional hypercube network with soft degradation are investigated. (original abstract)
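A brute-force sketch of the placement idea (not the paper's two-stage algorithm) follows: in a 4-dimensional hypercube, possibly with some faulty nodes, pick the working node with the smallest worst-case distance to the others. The fault model and the eccentricity criterion are assumptions.

```python
from collections import deque

DIM = 4
NODES = range(2 ** DIM)                 # hypercube nodes labelled 0..15

def neighbours(v, faulty):
    """Working neighbours of v: flip one address bit, skip faulty nodes."""
    return [v ^ (1 << b) for b in range(DIM) if v ^ (1 << b) not in faulty]

def eccentricity(src, faulty):
    """Longest shortest-path distance from src to any reachable working node (BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in neighbours(v, faulty):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return max(dist.values())

def place_server(faulty=frozenset()):
    """Pick the working node with the smallest worst-case distance."""
    working = [v for v in NODES if v not in faulty]
    return min(working, key=lambda v: eccentricity(v, faulty))

print(place_server())                          # intact cube: all nodes are equivalent
print(place_server(faulty={0b1111, 0b0111}))   # degraded cube: placement matters
```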
The article presents the possibility of applying a particle swarm algorithm to solve the vehicle routing problem. The presented algorithm was implemented in the author's own computer application and tested on the example of a multi-echelon distribution system. (original abstract)
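A plain continuous particle swarm optimizer is sketched below to show the swarm-update mechanics; the vehicle routing encoding of the article is not reproduced, and the toy sphere objective and parameter values are assumptions.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization for a continuous objective:
    each particle is pulled toward its own best and the swarm's best position."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy objective: a sphere function standing in for a routing cost
print(pso(lambda x: sum(v * v for v in x), dim=3))
```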
A One-Pass Heuristic for Nesting Problems
A two-dimensional cutting (packing) problem with irregularly shaped items and rectangular sheets is studied. Three types of problems are considered: single-sheet problems without restrictions on the number of elements, single-sheet problems with restrictions on the number of elements, and cutting stock problems (a restricted number of items and an unrestricted number of sheets). The aim of the optimization is to maximize the total area of the elements cut from a single sheet or to minimize the number of sheets used in cutting. A one-pass algorithm is proposed which uses the popular concept of a no-fit polygon (NFP). The decision whether an item is cut from a sheet in a given step depends on the value of a fitting function, which in turn depends on the change in the NFP of the individual items. We test eight different criteria for the evaluation of partial solutions. On the basis of numerical experiments, the algorithm that generates the best solution for each of the considered problem types is selected. The calculation results for these algorithms are compared with results obtained by other authors. (original abstract)
In this work we present the results of the design of a smart dust sensor platform for combustible gas leakage monitoring. During the design process we took into account a number of problems specific to combustible gas sensors, such as their high power consumption, the necessity to operate in an explosive environment, and sensor parameter degradation. To decrease power consumption we designed specific energy-efficient measurement algorithms. The resulting average power consumption of the node is low enough for a one-year autonomous lifetime. The methods and algorithms that were designed are very promising for catalytic combustible gas sensors. (original abstract)
Aggregating indicators are numerical characteristics of objects and processes that reflect their global properties, which often defy strict formalization. The problem of calculating aggregating indicators arises in many branches of social science, economics, and geography. In this paper we introduce a new method which uses several simple quantitative characteristics to construct a rating aggregating indicator. The evolutionary algorithm underlying our method does not use a provided formula or function to optimize, thus guaranteeing unbiased results. Moreover, the evolutionary algorithm takes into account modest effects that are annihilated by factor analysis. We illustrate the method by calculating the ratings of the innovation potential of Russian regions. (original abstract)
Java provides two different options for processing source code annotations. One of them is the annotation processing API used at compile time, and the other is the Reflection API used at runtime. The two options provide different APIs for accessing the program metamodel. In this paper, we examine the differences between these representations and discuss how the models could be unified, along with the advantages and disadvantages of this approach. Based on this proposal, we design a unified Java language model and present a prototype tool which can populate the unified model both during compilation and at runtime. The paper includes the designed API of this unified language model. To verify our approach, we have performed experiments showing the usability of the unified metamodel. (original abstract)
The article compares market basket analysis methods on the example of a transactional database. It presents the individual stages of data preparation and of the analysis carried out with the Statistica and SPSS Clementine software. A comparison of the basic characteristics of the Apriori and GRI methods allows an appropriate algorithm to be chosen depending on the type and volume of the input data. (original abstract)
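As a hedged illustration of the Apriori side of that comparison, the snippet below mines frequent itemsets from a toy transaction list by growing candidates level by level; it omits Apriori's subset-pruning step and any rule generation, and the data and threshold are assumptions.

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Minimal Apriori-style frequent-itemset mining: grow candidate itemsets
    level by level and keep those whose support meets the threshold."""
    n = len(transactions)
    transactions = [set(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    frequent = {frozenset([i]): support(frozenset([i])) for i in items
                if support(frozenset([i])) >= min_support}
    level = list(frequent)
    while level:
        # join same-size frequent sets that differ by exactly one item
        candidates = {a | b for a, b in combinations(level, 2)
                      if len(a | b) == len(a) + 1}
        level = []
        for c in candidates:
            s = support(c)
            if s >= min_support:
                frequent[c] = s
                level.append(c)
    return frequent

baskets = [("bread", "milk"), ("bread", "butter", "milk"),
           ("milk", "butter"), ("bread", "butter")]
for itemset, s in apriori(baskets, 0.5).items():
    print(sorted(itemset), round(s, 2))
```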
Two main approaches to solving the estimation problem are discussed, namely valuation and selection. A selection technique based on the “short-list” algorithm is presented, and an algorithm for solving the valuation problem is proposed. (AŁ)