Results found: 57
Search results
Searched for:
in keywords: Statistical models
The paper presents statistical models showing how the volume of extracted oil depends on outlays. On the basis of the estimated models, the following problem has been solved: the Amalgamated Oil Industries are to allocate K million zlotys among particular extracting regions in an optimal way, i.e. so that the total output of oil in these regions reaches the highest level. How high should the sums of money on hand be in order to allocate them to particular regions with a view to attaining this target? The problem belongs to parametric programming, as the optimal solution depends on the variable parameter K. (original abstract)
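The allocation problem described above can be sketched numerically. Assuming, purely for illustration, that each region has a constant marginal yield and an absorption limit (neither figure is from the paper), the optimum for a single budget constraint is obtained greedily, and evaluating it for several values of K shows the parametric role of the budget:

```python
def optimal_output(K, regions):
    """Maximise sum(a_i * x_i) subject to sum(x_i) <= K and 0 <= x_i <= cap_i.
    With a single budget constraint, filling regions in decreasing order of
    marginal yield a_i gives the optimal (greedy) LP solution."""
    total = 0.0
    for a, cap in sorted(regions, reverse=True):  # highest yield first
        x = min(cap, K)   # invest as much as the region can absorb
        total += a * x
        K -= x
        if K <= 0:
            break
    return total

# Hypothetical (yield per million zlotys, absorption cap) per region
regions = [(1.8, 10.0), (1.2, 15.0), (0.9, 20.0)]
```

Tracing `optimal_output(K, regions)` for K = 5, 20, 45 shows how the optimal plan shifts from funding only the best region to spilling over into less productive ones.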
The main thesis of this article is that the SIR model, despite certain limitations, is a very helpful mathematical tool for assessing the course of an epidemic. Proving this thesis is aided by answering two research questions: how long, according to the SIR model's forecast, will the pandemic last from its outbreak, and what, according to the SIR model's forecast, may be the scale of infections and when may their peak occur? The first part of the paper gives a general characterisation of the SIR and SEIR epidemic models. The second part discusses the assumptions and structure of the SIR model used to analyse the spread of the SARS-CoV-2 virus in Poland. The third part presents the predicted course of the coronavirus epidemic in Poland according to the results of the SIR model, the fourth according to the forecasts of Polish and foreign researchers, and the fifth according to the official data of the Ministry of Health. The paper closes with a summary of the preceding considerations and the more important conclusions. (excerpt)
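The quantities the SIR model forecasts (duration, scale of infections, timing of the peak) can be sketched with a simple Euler integration of the model's equations; the parameter values used below are generic illustrations, not the ones calibrated in the article:

```python
def simulate_sir(beta, gamma, i0, days, dt=0.1):
    """Euler integration of the SIR equations (S, I, R as population fractions):
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  R = 1 - S - I."""
    s, i = 1.0 - i0, i0
    peak_i, peak_day = i0, 0.0
    for step in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += ds * dt
        i += di * dt
        if i > peak_i:                       # track the epidemic peak
            peak_i, peak_day = i, (step + 1) * dt
    return s, i, 1.0 - s - i, peak_day, peak_i
```

With a basic reproduction number beta/gamma of about 3, the simulated epidemic peaks at roughly 30% prevalence and burns out well before day 300, illustrating the kind of duration/peak forecast the abstract refers to.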
The EU Statistics on Income and Living Conditions (EU-SILC) has provided annual estimates of a number of labour market indicators for EU countries since 2003, with an almost exclusive focus on national rates. However, it is impossible to obtain reliable direct estimates of labour market statistics at low levels of aggregation based on the EU-SILC survey. In such cases, model-based small area estimation can be used. In this paper, the low work intensity indicator for spatial domains in Poland between 2005 and 2012 was estimated. The Rao and You (1994), Fay and Diallo (2012), and Marhuenda, Molina and Morales (2013) models were applied. A bootstrap MSE for the discussed methods was proposed. The results indicate that these models provide more reliable estimates than direct estimation. (original abstract)
In this article, a new reciprocal Rayleigh extension called the Xgamma reciprocal Rayleigh model is defined and studied. The relevant statistical properties are derived, and useful results related to convexity and concavity are addressed. The estimation of the parameters is discussed using different methods: maximum likelihood, ordinary least squares, weighted least squares, Cramér-von Mises, and bootstrapping. A simulation study was conducted to assess the performance of the proposed estimation methods. Many bivariate and multivariate models have also been derived based on the Farlie-Gumbel-Morgenstern, Clayton, Rényi entropy and Ali-Mikhail-Haq copulas. A modified Nikulin-Rao-Robson test for right-censored validation is applied to a censored real data set. (original abstract)
This article defines the Autoregressive Fractional Unit Root Integrated Moving Average (ARFURIMA) model for modelling ILM time series with a fractional difference value in the interval 1 < d < 2. The performance of the ARFURIMA model is examined through a Monte Carlo simulation. Some applications are also presented, using energy series, Bitcoin exchange rates and other financial data, to compare the performance of the ARFURIMA and the Semiparametric Fractional Autoregressive Moving Average (SEMIFARMA) models. The findings showed that the ARFURIMA outperformed the SEMIFARMA model. The study's conclusion provides another perspective on analysing large time series data for modelling and forecasting, and the findings suggest that the ARFURIMA model should be applied if the studied data show a type of ILM process with a degree of fractional difference in the interval 1 < d < 2. (original abstract)
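The core ingredient of such models, the fractional difference operator (1 - L)^d, can be sketched via its binomial expansion; this is a generic textbook construction, not the article's implementation:

```python
def frac_diff(series, d, n_weights=100):
    """Apply the fractional difference operator (1 - L)^d via its binomial
    expansion; weights follow w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    # Convolve the truncated weight sequence with the series
    return [sum(w[k] * series[t - k] for k in range(min(t + 1, n_weights)))
            for t in range(len(series))]
```

For integer d the operator collapses to ordinary differencing (d = 1 gives first differences, d = 2 second differences), while non-integer d in (1, 2) interpolates between them, which is the regime the ARFURIMA model targets.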
One of the Sustainable Development Goals (Goal 6) set by the United Nations is to provide people with access to water and sanitation through sustainable water resources management. Water supply companies carrying out tasks commissioned by local authorities ensure there is an optimal amount of water in the water supply system. The aim of this study is to present the results of the work on a statistical model which determined the influence of individual atmospheric factors on the demand for water in the city of Lodz, Poland, in 2010-2019. In order to build the model, the study used data from the Water Supply and Sewage System Company (Zakład Wodociągów i Kanalizacji Sp. z o.o.) in the city of Lodz complemented with data on weather conditions in the studied period. The analysis showed that the constructed models make it possible to perform a forecast of water demand depending on the expected weather conditions. (original abstract)
We present a simple yet effective variable selection method for the two-fold nested subarea model, which generalizes the widely used Fay-Herriot area model. The two-fold subarea model consists of a sampling model and a linking model, which has a nested-error model structure but with unobserved responses. To select variables under the two-fold subarea model, we first transform the linking model into a model with the structure of a regular regression model and unobserved responses. We then estimate an information criterion based on the transformed linking model and use the estimated information criterion for variable selection. The proposed method is motivated by the variable selection method of Lahiri and Suntornchost (2015) for the Fay-Herriot model and the variable selection method of Li and Lahiri (2019) for the unit-level nested-error regression model. Simulation results show that the proposed variable selection method performs significantly better than some naive competitors, especially when the variance of the area-level random effect in the linking model is large. (original abstract)
This paper explored the determinants of the size of the informal economy, estimated with survey data in the multiple indicators, multiple causes (MIMIC) model. This model enables the estimation of an unobserved variable from known observable variables. The size of the informal economy was estimated with observable variables; to conduct the estimation, the model grouped the observable variables of the study into causes and indicators. In the underlying study, variables such as the harmfulness of the shadow economy, growth of money outside banks, tax burden, the intensity of government regulations, self-employment, the unemployment rate, and agricultural sector dominance had positive effects on the estimated size of the informal economy, whereas real GDP per capita, total employment, institutional quality, and tax morality had negative effects. The study recommends a future line of research in which scholars estimate the size of the informal economy with the indirect approach using panel data, in order to determine its impacts on the regular economy and other related consequences. (original abstract)
The risk of counterparty insolvency (default) is critical in banking. The literature offers various models based on discriminant analysis, logistic regression and data-mining techniques. This article uses logistic regression to verify the effectiveness of the model proposed by R. Jagiełło across different sectors. As an alternative, a logistic regression model with the nominal variable SEKTOR, fitted on the pooled data sample, is proposed. A dynamic survival model, the Cox model, is also estimated. Including the nominal variable SEKTOR in the model only slightly increases its discriminatory power (in the default region). The discriminatory power of the Cox model is lower, except for the classification of entities in default, where higher classification accuracy is the Cox model's advantage. (original abstract)
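The idea of a sector effect can be illustrated with a toy logistic regression in which a sector indicator enters as a dummy variable. The fitting routine below is a minimal gradient-descent sketch with invented data and feature names; it is not the article's model:

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Minimal logistic regression fitted by batch gradient descent."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * p, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # predicted prob minus label
            gb += err
            for j in range(p):
                gw[j] += err * xi[j]
        b -= lr * gb / n
        for j in range(p):
            w[j] -= lr * gw[j] / n
    return w, b

def predict(w, b, xi):
    """Predicted probability of default for one observation."""
    return 1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))

# Toy sample: [debt ratio, sector dummy]; label 1 = default (hypothetical data)
X = [[0.2, 0], [0.3, 1], [0.4, 0], [0.7, 0], [0.8, 1], [0.9, 1]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
```

Comparing fits with and without the dummy column is the pooled-sample analogue of the sector comparison described in the abstract.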
Economic activity is often assessed using ratio indicators that relate the effects obtained to the outlays incurred or, conversely, the outlays incurred to the effects obtained. Linear optimisation models with a fractional-linear objective function can be used to describe economic problems of this kind. (excerpt)
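A fractional-linear objective of the kind mentioned (effects per unit of outlay) can be illustrated over a finite set of feasible production plans; the coefficients and plans below are hypothetical:

```python
def best_efficiency(plans, effect, outlay):
    """Pick the plan x maximising the fractional-linear objective
    (effect . x) / (outlay . x), over a finite list of feasible plans
    with strictly positive total outlay."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return max(plans, key=lambda x: dot(effect, x) / dot(outlay, x))

effect = [5.0, 3.0]   # effect per unit of each activity (hypothetical)
outlay = [2.0, 1.0]   # outlay per unit of each activity (hypothetical)
plans = [[1, 0], [0, 1], [1, 1], [2, 1]]
```

For continuous feasible regions such ratio objectives are typically linearised, e.g. via the Charnes-Cooper transformation, which reduces the problem to an ordinary linear programme.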
Nicolaie et al. (2010) have advanced a vertical model as the latest continuous time competing risks model. The main objective of this article is to re-cast this model as a nonparametric model for analysis of discrete time competing risks data. Davis and Lawrance (1989) have advanced a cause-specific-hazard driven method for summarizing discrete time data nonparametrically. The secondary objective of this article is to compare the proposed model to this model. We pay particular attention to the estimates for the cause-specific-hazards and the cumulative incidence functions as well as their respective standard errors. (original abstract)
Small area estimation methods have become a widely used tool for providing accurate estimates of regional indicators such as poverty measures. Recent research has provided evidence that spatial modelling can still improve the precision of regional and local estimates. In this paper, we provide an intrinsic spatial autocorrelation model and prove the propriety of the posterior under a flat prior. Further, using the SAIPE poverty data, we show that the gain in efficiency from a spatial model can be especially important in the absence of strong auxiliary variables. (original abstract)
Ethiopia has one of the largest livestock populations in Africa. In 2016-2017, the share of live animals, leather, and meat in the country's total exports reached 9.6%. This paper aims to identify the determinants of the export of Ethiopian livestock products by means of vector autoregressive and vector error correction models. Multivariate time series are used to model the association between the Ethiopian livestock export products included in the study. Vector autoregressive and vector error correction models are used for modelling and inference. The results indicated the existence of a long-term correlation between the volumes of live animal, meat and leather exports. The volume of meat exports is significantly affected by a lag in the export of live animals in the short run. Moreover, 3.7% of the short-run imbalance in the volume of leather exports is adjusted each quarter. It is suggested that the exporters of livestock products should properly utilise the Ethiopian livestock resources. On the other hand, the government should offer different forms of support to exporters, especially those focusing on exporting value-added products. (original abstract)
One of the problems in socio-economic research is estimating the proportion of affirmative answers to sensitive questions. Sensitive questions are questions to which a respondent may not give a truthful answer. This paper presents the construction of an exact confidence interval for this proportion and compares the proposed interval with asymptotic intervals. (original abstract)
One of the problems in survey research is estimating the proportion of affirmative answers to sensitive questions. It requires the use of special response models. The first model of this kind was the randomised response model [Warner 1965]. Many other models now exist, among them the crosswise model of non-randomised responses [Yu et al. 2008]. The estimator of the proportion of sensitive answers in this model can take values outside the interval [0, 1]. This paper shows how truncating the estimator to the interval [0, 1] affects its properties. (original abstract)
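The crosswise-model estimator and the truncation discussed above can be sketched as follows, where p is the known probability of the innocuous attribute (p ≠ 1/2); a minimal illustration, not the paper's analysis:

```python
def crosswise_estimate(lam_hat, p, truncate=False):
    """Crosswise-model estimator of the sensitive proportion pi.
    A 'yes' occurs with probability lambda = pi*p + (1 - pi)*(1 - p),
    so pi_hat = (lam_hat + p - 1) / (2p - 1), which can fall outside
    [0, 1]; optionally clip it back into the unit interval."""
    pi_hat = (lam_hat + p - 1) / (2 * p - 1)
    return min(1.0, max(0.0, pi_hat)) if truncate else pi_hat
```

When the observed "yes" rate `lam_hat` is small relative to p, the raw estimator goes negative, which is exactly the situation in which truncation changes the estimator's properties.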
In biomedical research, challenges in working with multiple events often arise when dealing with time-to-event data. Studies with prolonged survival durations admit numerous possibilities: patients may die of other causes, or may experience intermediate events (e.g. cancer relapse) before dying within the study period. In this context, the semi-competing risks framework proves useful. Prolonged follow-up studies are also affected by censored observations, especially interval censoring and right censoring. Some conventional approaches, like the Cox proportional hazards model, work with time-to-event data. However, the accelerated failure time (AFT) model can be more effective than the Cox model because it does not rely on the proportional hazards assumption. We also observed covariates impacting the time-to-event data measured in a categorical format. No established method currently exists for fitting an AFT model that incorporates categorical covariates, multiple events, and censored observations simultaneously. This work is dedicated to overcoming these challenges through R programming and data illustration. We conclude that the developed methods are suitable and easy to implement in R software. The selection of covariates in the AFT model can be evaluated using model selection criteria such as the Deviance Information Criterion (DIC) and the log-pseudo marginal likelihood (LPML). Various extensions of the AFT model, such as AFT-DPM and AFT-LN, have been demonstrated. The final model was selected based on the smallest DIC values and larger LPML values. (original abstract)
Current demographic changes require greater participation of people aged 50 or older in the labour market. Previous research shows that the chances of returning to employment decrease with the length of the unemployment period. In the case of older people who have not reached the statutory retirement age, these chances also depend on the time they have left until retirement. Our study aims to assess the probability of leaving unemployment for people aged 50-71 based on their characteristics and the length of the unemployment period. We use data from the Labour Force Survey for 2019-2020. The key factors determining employment status are identified using the proportional hazards model. We take these factors into account and use the direct adjusted survival curve to show how the probability of returning to work in Poland changes as people age. Since few people take up employment around their retirement age, an in-depth evaluation of the accuracy of the models' predictions is crucial to assessing the results; hence, a time-dependent ROC curve is used. Our results indicate that the key factor influencing the return to work after a period of unemployment among older people in Poland is whether they have reached the age of 60. Other factors that proved important in this context are sex and education level. (original abstract)
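Survival curves of the kind used in such studies are usually estimated nonparametrically. The sketch below is a minimal Kaplan-Meier product-limit estimator, a generic construction rather than the authors' direct-adjusted procedure:

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    events[i] = 1 for an observed event (e.g. return to work), 0 if censored.
    Returns {event time: estimated survival probability just after it}."""
    deaths = Counter(t for t, e in zip(times, events) if e)
    removed = Counter(times)           # everyone leaves the risk set at their time
    at_risk = len(times)
    s, curve = 1.0, {}
    for t in sorted(removed):
        d = deaths.get(t, 0)
        if d:                          # survival drops only at event times
            s *= 1 - d / at_risk
            curve[t] = s
        at_risk -= removed[t]
    return curve
```

Censored observations reduce the risk set without lowering the curve, which is why censoring patterns around retirement age matter so much for the accuracy assessment the abstract describes.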
A speed-density model can be utilised to manage pedestrian flow in a network efficiently. However, how each model measures and optimises the performance of the network is rarely reported. Thus, this paper analyses and optimises the flow in a topological network using various speed-density models. Each model was first used to obtain the optimal arrival rates for all individual networks. The optimal value of each network was then set as a flow constraint in a network flow model. The network flow model was solved to find the optimal arrival rates to the source networks. The optimal values were then used to measure their effects on the performance of each available network. The performance results of the model were then compared with those of other speed-density models. The analysis of the results can help decision-makers understand how arrival rates propagate through traffic and determine the level of network throughput. (original abstract)
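A classic example of the speed-density relationship discussed here is the Greenshields model, in which speed falls linearly with density and flow is their product; the free-flow speed and jam density below are arbitrary illustration values, not parameters from the paper:

```python
def greenshields_flow(k, vf=1.34, kj=5.4):
    """Greenshields model: speed v = vf * (1 - k / kj); flow q = k * v.
    k: density (ped/m^2), vf: free-flow speed (m/s), kj: jam density
    (ped/m^2, flow drops to zero at a standstill)."""
    return k * vf * (1 - k / kj)
```

Flow is maximised at half the jam density (k = kj/2), which is the kind of per-network optimum that can then serve as a flow constraint in a network flow model.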
Modelling the covariance matrix in linear mixed models provides an additional advantage in making inference about subject-specific effects, particularly in the analysis of repeated measurement data, where time-ordering of the responses induces significant correlation. Some difficulties encountered in these modelling procedures include high dimensionality and statistical interpretability of parameters, positive definiteness constraint and violation of model assumptions. One key assumption in linear mixed models is that random errors and random effects are independent, and its violation leads to biased and inefficient parameter estimates. To minimize these drawbacks, we developed a procedure that accounts for correlations induced by violation of this key assumption. In recent literature, variants of Cholesky decomposition were employed to circumvent the positive definiteness constraint, with parsimony achieved by joint modelling of mean and covariance parameters using covariates. In this article, we developed a linear Cholesky decomposition of the random effects covariance matrix, providing a framework for inference that accounts for correlations induced by covariate(s) shared by both fixed and random effects design matrices, a circumstance leading to lack of independence between random errors and random effects. The proposed decomposition is particularly useful in parameter estimation using the maximum likelihood and restricted/residual maximum likelihood procedures. (original abstract)
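The Cholesky-based parameterisation mentioned above removes the positive-definiteness constraint because any lower-triangular L with a positive diagonal yields a valid covariance L Lᵀ. A minimal decomposition sketch (the generic algorithm, not the authors' linear-decomposition framework):

```python
import math

def cholesky(a):
    """Return the lower-triangular L with L L^T equal to the symmetric
    positive-definite matrix a (given as a list of lists)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]  # off-diagonal entry
    return L
```

Modelling the entries of L (for example as linear functions of covariates) is what achieves the parsimony the abstract refers to, since the entries are unconstrained apart from the positive diagonal.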
In accordance with the new international financial supervision directives (IFRS 9), banks should look at a new set of analytical tools, such as machine learning. Introducing these methods into banking practice requires reformulating business objectives, both in terms of predictive accuracy and the definition of risk factors. The article compares methods of variable selection and "importance" attribution in statistical and algorithmic models. The computations were carried out on an example of financial data classification. The effectiveness of various machine learning algorithms was compared on selected sets of variables. The results of the analyses indicate the need to revise the concept of a variable's "importance" so that it does not depend on the model structure. (original abstract)